Re: [Haskell-cafe] A question regarding reading CPP definitions from a C header
If you use cpphs as a library, there is an API called runCpphsReturningSymTab. Thence you can throw away the actual preprocessed result text, keep only the symbol table, and look up whatever macros you wish, to find their values. I suggest you make this into a little code-generator, to produce a Haskell module containing the values you need.

On 5 Oct 2013, at 21:37, Ömer Sinan Ağacan wrote:
> Hi all,
>
> Let's say I want to #include a C header file in my Haskell library just to read some macro definitions. The C header file also contains some C code. Is there a way to load only macro definitions and not C code in #include declarations in Haskell?
>
> What I'm trying to do is this: I'm linking my library against a C library, but I want to support different versions of it, so I want to read its version from one of its header files. The problem is that the header file contains some C code, which gets mixed into my Haskell source before compilation.
>
> Any suggestions would be appreciated,
___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
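As a self-contained sketch of the code-generator idea, here is a toy that does not use cpphs at all: it scans a header for simple one-line `#define NAME VALUE` macros and emits a tiny Haskell module. The names `extractDefines` and `genModule` are invented for this sketch; cpphs's runCpphsReturningSymTab would give you the symbol table with full macro semantics instead.

```haskell
import Data.Char (toLower)

-- Hypothetical fallback: scan a C header for simple one-line
-- "#define NAME VALUE" macros. (cpphs's symbol table would handle
-- real macro semantics; this toy does not.)
extractDefines :: String -> [(String, String)]
extractDefines src =
  [ (name, unwords rest)
  | line <- lines src
  , ("#define" : name : rest) <- [words line] ]

-- Emit a little Haskell module binding each macro's value as a String.
-- Macro names are usually upper-case, so lower-case them to make
-- valid Haskell identifiers.
genModule :: String -> [(String, String)] -> String
genModule modName defs = unlines $
  ("module " ++ modName ++ " where") :
  [ low ++ " :: String\n" ++ low ++ " = " ++ show v
  | (n, v) <- defs, let low = map toLower n ]

main :: IO ()
main = putStr (genModule "LibVersion"
                 (extractDefines "#define LIB_VERSION 1.2.3\nint foo(void);\n"))
```

Running this as part of a build step would produce a `LibVersion` module you can import from the rest of the library, without ever #include-ing the C code itself.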
Re: [Haskell-cafe] cpphs calls error when it finds an #error declaration
On 27 Aug 2013, at 08:33, Niklas Hambüchen wrote:
> @Malcolm, would you mind a change towards throwing an exception that is different from error, so that it can be easily caught, or even better, a change from
>
>     runCpphs :: ... -> IO String
>
> to
>
>     runCpphs :: ... -> IO (Either String String)
>
> or similar?

Have you tried simply wrapping the call to runCpphs in a "catch"? Something like:

    safeRunCpphs :: ... -> IO (Either String String)
    safeRunCpphs foo = fmap Right (runCpphs foo)
                           `catch` \e -> return (Left (show (e :: SomeException)))

(catching SomeException from Control.Exception, since "error" raises an ErrorCall rather than anything that can be pattern-matched as a UserError).

> If an exception based interface is kept, it would be nice to add some haddock to `runCpphs`; not knowing about the existence of #error, it is easy to assume that the IO is only used for accessing the FilePath passed in.

The IO is also used for reading #include'd files, of course. I'd be happy to add some extra documentation making the behaviour on #error clearer, if you can suggest some text that would have helped you.

Regards, Malcolm
___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
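A runnable version of the wrapper pattern, with a stand-in for runCpphs (the `fakeRunCpphs` function is invented for this sketch; like cpphs, it raises an exception when it meets an #error directive):

```haskell
import Control.Exception (try)
import Data.List (isPrefixOf)
import System.IO.Error (ioeGetErrorString)

-- Stand-in for runCpphs, invented for this sketch: it fails with an
-- IOError whenever the input contains an #error directive.
fakeRunCpphs :: String -> IO String
fakeRunCpphs src
  | any ("#error" `isPrefixOf`) (lines src) = ioError (userError "#error hit")
  | otherwise                               = return src

-- The wrapper from the thread, completed: turn the exception into a Left.
safeRunCpphs :: String -> IO (Either String String)
safeRunCpphs src = do
  r <- try (fakeRunCpphs src) :: IO (Either IOError String)
  return (either (Left . ioeGetErrorString) Right r)

main :: IO ()
main = do
  safeRunCpphs "module M where" >>= print   -- Right "module M where"
  safeRunCpphs "#error boom"    >>= print   -- Left "#error hit"
```

The same `try` shape works against the real runCpphs; only the exception type you choose to catch may need widening (e.g. to SomeException) depending on how cpphs signals the failure.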
Re: [Haskell-cafe] Why GHC is written in Happy and not a monadic parser library?
On 3 Aug 2013, at 21:03, Jason Dagit wrote: > Another con of using parsec that I forgot to mention in my previous > email is that with Parsec you need to be explicit about backtracking > (use of try). Reasoning about the correct places to put try is not > always easy and parsec doesn't help you with the task. In my > experience, this is the main bug that people run into when using > parsec. Although the original question did not mention parsec explicitly, I find it disappointing that many people immediately think of it as the epitome of monadic combinator parsing. The power of good marketing, eh? There are so many other good parsing libraries out there. Parsec happened to cure some known space-leaks in rival libraries about the time of its release (2000 or so), but the main reason it is popular is simply because it was distributed alongside ghc for a long time. Curiously enough, the complaint you make about parsec is exactly the same as the observation that drove the development of my own set of combinators - polyparse. But the concept of commitment to a partial parse (to prevent backtracking) was already present somewhat in Röjemo's applicative parsers way back in 1994. He had both `ap` and `apCut` (the naming of "cut" borrowed from Prolog I suppose). Space performance and the elimination of space leaks was totally his focus: he mentions being able to recompile the compiler itself in just 3Mb of RAM. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
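The `apCut`/commit idea mentioned above can be modelled with a toy parser type. All names here are invented for illustration (this is not polyparse's actual API): a failure is either "soft", allowing an alternative to be tried, or "committed", in which case it propagates past any alternatives.

```haskell
-- A toy parser in which failures carry a "committed" flag, modelling
-- the cut/commit idea from Röjemo's apCut and polyparse's commit.
data Result a = Ok a String | Err Bool String deriving Show  -- Bool: committed?

newtype P a = P { runP :: String -> Result a }

instance Functor P where
  fmap f (P p) = P $ \s -> case p s of
    Ok a rest -> Ok (f a) rest
    Err c e   -> Err c e

instance Applicative P where
  pure a = P $ \s -> Ok a s
  P pf <*> P pa = P $ \s -> case pf s of
    Err c e   -> Err c e
    Ok f rest -> case pa rest of
      Err c e    -> Err c e
      Ok a rest' -> Ok (f a) rest'

char :: Char -> P Char
char c = P $ \s -> case s of
  (x:xs) | x == c -> Ok c xs
  _               -> Err False ("expected " ++ show c)

commit :: P a -> P a            -- a failure beyond this point cannot backtrack
commit (P p) = P $ \s -> case p s of
  Err _ e -> Err True e
  ok      -> ok

orElse :: P a -> P a -> P a     -- try q only if p fails *uncommitted*
orElse (P p) (P q) = P $ \s -> case p s of
  Err False _ -> q s
  r           -> r
```

For example, `runP ((char 'a' *> char 'b') `orElse` (char 'a' *> char 'c')) "ac"` backtracks and succeeds, whereas wrapping the 'b' parser in `commit` turns the same failure into a committed error that `orElse` will not recover from. The space benefit in a real library comes from the fact that once committed, the unconsumed input held for backtracking can be dropped.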
Re: [Haskell-cafe] Why GHC is written in Happy and not a monadic parser library?
On 3 Aug 2013, at 02:20, Jason Dagit wrote:
>> Hi!
>> Is there any specific reason why GHC is written in a parser GENERATOR (Happy) and not in a MONADIC PARSER COMBINATOR library (like parsec)?
>>
>> Is Happy faster, does it handle errors better, does it have some great features, or anything else?
>
> One reason is that it predates monadic parser libraries.

I'm not entirely sure this is true. I reckon the development of applicative parser combinators (used in the implementation of the nhc12 compiler, way back in 1995 or so) is roughly contemporaneous with the development of Happy, and its use inside ghc. (I found a release note from Sept 1997 that said ghc had just converted its interface-file parser to use Happy.) Certainly table-driven parsers in non-functional languages go back a lot further, and functional combinator-based parsing was then the relative newcomer.

As to why ghc switched to Happy, the literature of the time suggests that generated table-driven parsers were faster than combinator-based parsers. I'm not sure I have ever seen any performance figures to back that up, however. And with the general improvement in performance of idiomatic Haskell over the last twenty years, I'd be interested to see a modern comparison.

Regards, Malcolm
___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] "Casting" newtype to base type?
On 1 Jul 2013, at 16:07, Vlatko Basic wrote:
> I had a (simplified) record
>
>     data P = P {
>         a :: String,
>         b :: String,
>         c :: IO String
>     } deriving (Show, Eq)
>
> but to get automatic deriving of 'Show' and 'Eq' for 'data P' I have created 'newtype IOS' and its 'Show' and 'Eq' instances
>
>     newtype IOS = IO String

Not quite! That is a newtype'd String, not a newtype'd (IO String). Try this:

    newtype IOS = IOS (IO String)

> but now when I try to set the 'c' field in
>
>     return $ p {c = readFile path}
>
> I get the error
>     Couldn't match expected type `IOS' with actual type `IO String'

Use the newtype constructor to convert an IO String -> IOS:

    return $ p {c = IOS $ readFile path}

Regards, Malcolm
___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
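Putting the whole fix together as a runnable sketch (the trivial Show/Eq instances below are one plausible choice, since IO actions have no meaningful equality or textual form; the field values are invented for the example):

```haskell
newtype IOS = IOS (IO String)

-- IO actions cannot be compared or shown, so give deliberately
-- trivial instances (one plausible choice, not the only one):
instance Show IOS where show _ = "<IO String>"
instance Eq IOS where _ == _ = True

data P = P { a :: String, b :: String, c :: IOS } deriving (Show, Eq)

main :: IO ()
main = do
  let p  = P { a = "x", b = "y", c = IOS (return "initial") }
      p' = p { c = IOS (return "updated") }   -- the record update from the thread
  print p'                                    -- P {a = "x", b = "y", c = <IO String>}
  let IOS act = c p'
  act >>= putStrLn                            -- updated
```

Unwrapping with `let IOS act = ...` recovers the IO action whenever you actually need to run it.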
Re: [Haskell-cafe] code to HTML
On 3 Jun 2013, at 20:38, Corentin Dupont wrote:
> I'd like to transform a .hs file into a .html file. The objective is that the .html file, when rendered, looks exactly the same as the .hs, with the exception that every function in the code is a link to its haddock documentation. Is that possible?

Programatica could do this ten years ago. But it sort-of fell by the wayside. Maybe there is some code still available that could be dusted off and revived: http://ogi.altocumulus.org/~hallgren/h2h.html

Regards, Malcolm
___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] GSoC proposal: Haskell AST-based refactoring and API upgrading tool
On 29 Apr 2013, at 07:00, Niklas Hambüchen wrote: > I would like to propose the development of source code refactoring tool > that operates on Haskell source code ASTs and lets you formulate rewrite > rules written in Haskell. Seen this? http://www.haskell.org/haskellwiki/HaRe Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Prolog-style patterns
On 9 Apr 2013, at 14:46, Sturdy, Ian wrote:
> As far as the use of Eq goes, Eq is already enshrined in pattern matching by pattern matching against literals.

Not true. Pattern-matching against data constructors explicitly avoids any use of Eq. Demonstration:

    data Foo = Foo | Bar

    instance Eq Foo where
      _ == _ = True

    isFoo Foo = True
    isFoo Bar = False

    main = do
      print (isFoo Bar)   -- False: the pattern match ignores the Eq instance
      print (Foo == Bar)  -- True: the perverse Eq instance is consulted

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] GSoC Project Proposal: Markdown support for Haddock
On 8 Apr 2013, at 14:52, Roman Cheplyaka wrote:
>> In my opinion, it is perfectly valid to have intentional preprocessor directives inside Haskell comments.
>
> Could you give an example where this is useful? ... macro expansions inside the comments are rather exotic.

    {- | Some module documentation.
    #define WEBSITE http://some.really.rather.long/and/tedious/URL/that_I_dont_want_to_type_too_often
    You can find more information about Foo at WEBSITE/Foo
    and Bar at WEBSITE/Bar
    -}

As you say, the #define could equally live outside the comment, but I don't see why we should have an arbitrary restriction that it _must_ live outside the comment. As you also say, "the liberty to write whatever one wants inside a comment feels important", and if that includes the intentional use of CPP, why not?

Regards, Malcolm
___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] GSoC Project Proposal: Markdown support for Haddock
And cpphs strips C comments too. :-) But seriously, John's use-case is the exact opposite of what you suggest. John wants to keep the # inside the comment block. You suggest to remove the comment-block altogether? When I checked the example with cpphs, it turns out that the # line is retained, generating a warning but not an error. I expect Gnu cpp is the software that throws an error. In my opinion, it is perfectly valid to have intentional preprocessor directives inside Haskell comments. Regards, Malcolm On 7 Apr 2013, at 22:12, Roman Cheplyaka wrote: > Looks like a bug in cpphs to me (CC'ing Malcolm). It should respect > comments. E.g. GNU cpp strips C comments. > > Roman > > * John MacFarlane [2013-04-05 16:04:32-0700] >> I like markdown and use it all the time. While I acknowledge the >> problems that have been pointed out, markdown has the advantage of being >> easily readable "as it is" in the source document, and not looking like >> markup. >> >> But I do want to point out one problem with markdown as a format for >> documentation in Haskell files. Consider: >> >> >> module MyModule >> {- >> # Introduction >> >> This is my module >> -} >> where >> import System.Environment >> >> main = getArgs >>= print >> >> >> Now try to compile with -cpp, and you'll get an error because of the '#' >> in column 1. '#' in column 1 is common in markdown (and even >> indispensible for level 3+ headers). >> >> One could work around this by disallowing level 3+ headers, by allowing >> the headers to be indented, or by introducing new setext-like syntax for >> level 3+ headers, but it is a problem for the idea of using a markdown >> SUPERset. >> >> John >> >> +++ dag.odenh...@gmail.com [Apr 05 13 21:59 ]: >>> I forgot the mention the craziness with the *significant trailing >>> whitespace*. 
>>> >>> On Fri, Apr 5, 2013 at 9:49 PM, [1]dag.odenh...@gmail.com >>> <[2]dag.odenh...@gmail.com> wrote: >>> >>> Personally I think Markdown sucks, although perhaps less than Haddock >>> markup. >>> Still: >>> * No document meta data >>> * No code block meta data like language for syntax highlighting >>> * No tables >>> * No footnotes >>> * HTML fallback is insecure >>> * Confusing syntax (is it []() or ()[] for links?) >>> * Syntax that gets in the way (maybe I don't want *stars* emphasized) >>> * Above point leads to non-standard dialects like "GitHub Markdown" >>> (no, GitHub doesn't use markdown) >>> * Not extensible, leading to even more non-standard hacks and >>> work-arounds (GitHub Markdown, Pandoc Markdown, other Markdown >>> libraries have their own incompatible extensions) >>> * Not well suited for web input (e.g. four-space indentation for code >>> blocks), although not that important for Haddock >>> An important thing to note here is that no, Markdown has *not* won >>> because no one is actually using *Markdown*. They're using their own, >>> custom and incompatible dialects. >>> Only two of the above points apply to reStructuredText (web input and >>> syntax getting in the way), and those particular points don't apply to >>> Creole. Therefore I tend to advocate Creole for web applications and >>> reStructuredText for documents. >>> On Thu, Apr 4, 2013 at 6:49 PM, Johan Tibell >>> <[3]johan.tib...@gmail.com> wrote: >>> >>> Hi all, >>> Haddock's current markup language leaves something to be desired >>> once >>> you want to write more serious documentation (e.g. several >>> paragraphs >>> of introductory text at the top of the module doc). Several features >>> are lacking (bold text, links that render as text instead of URLs, >>> inline HTML). >>> I suggest that we implement an alternative haddock syntax that's a >>> superset of Markdown. It's a superset in the sense that we still >>> want >>> to support linkifying Haskell identifiers, etc. 
Modules that want to >>> use the new syntax (which will probably be incompatible with the >>> current syntax) can set: >>> {-# HADDOCK Markdown #-} >>> on top of the source file. >>> Ticket: [4]http://trac.haskell.org/haddock/ticket/244 >>> -- Johan >>> ___ >>> Haskell-Cafe mailing list >>> [5]Haskell-Cafe@haskell.org >>> [6]http://www.haskell.org/mailman/listinfo/haskell-cafe >>> >>> References >>> >>> 1. mailto:dag.odenh...@gmail.com >>> 2. mailto:dag.odenh...@gmail.com >>> 3. mailto:johan.tib...@gmail.com >>> 4. http://trac.haskell.org/haddock/ticket/244 >>> 5. mailto:Haskell-Cafe@haskell.org >>> 6. http://www.haskell.org/mailman/listinfo/haskell-cafe >> >>> ___ >>> Haskell-Cafe mailing list >>> Haskell-Cafe@haskell.org >>> http://www.haskell.org/mailman/listinfo/haskell-cafe >> >> >> ___
On 1 Apr 2013, at 01:21, Seth Lastname wrote:
> Note 2 says, "If the first token after a 'where' (say) is not indented more than the enclosing layout context, then the block must be empty, so empty braces are inserted."
>
> It seems that, in Note 2, the "first token" necessarily refers to a lexeme other than '{' (else it would not make sense),

Correct.

> in which case a '{n}' token will have been inserted after 'where' (in the example given in the note), yielding a nested context which is "not indented more than the enclosing layout context",

Yes, a "{n}" token has been inserted after the "where". No, it does not yield an incorrectly nested context, because L is the function that decides whether to add to the context. Looking only at the three equations for L that deal with the pseudo-token "{n}", including their side conditions, we see:

    L ({n} : ts) (m : ms) = { : (L ts (n : m : ms))    if n > m    (Note 1)
    L ({n} : ts) []       = { : (L ts [n])             if n > 0    (Note 1)
    L ({n} : ts) ms       = { : } : (L (<n> : ts) ms)              (Note 2)

So, the third clause is triggered either when the nested-context stack (ms) is empty and n is zero or negative; or when the context stack is non-empty and n <= m.
___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
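The quoted {n} equations, together with the <n> equations they hand off to, transcribe almost directly into Haskell. This toy sketch covers only the implicit-layout tokens (not explicit braces or the report's parse-error clause), with a token type invented for the purpose:

```haskell
-- Toy transcription of the report's layout function L, covering the
-- {n} clauses quoted above plus the <n> clauses they hand off to.
data Tok = Open Int | Angle Int | Lex String   -- {n}, <n>, ordinary lexeme

layoutL :: [Tok] -> [Int] -> [String]
layoutL (Open n : ts) (m : ms)
  | n > m                    = "{" : layoutL ts (n : m : ms)          -- Note 1
layoutL (Open n : ts) []
  | n > 0                    = "{" : layoutL ts [n]                   -- Note 1
layoutL (Open n : ts) ms     = "{" : "}" : layoutL (Angle n : ts) ms  -- Note 2
layoutL (Angle n : ts) (m : ms)
  | m == n                   = ";" : layoutL ts (m : ms)
  | n < m                    = "}" : layoutL (Angle n : ts) ms
layoutL (Angle _ : ts) ms    = layoutL ts ms
layoutL (Lex t : ts) ms      = t : layoutL ts ms
layoutL [] (m : ms) | m /= 0 = "}" : layoutL [] ms
layoutL [] _                 = []
```

Because failed guards fall through to the next equation, the third Open clause fires exactly in the situation described above: either the stack is empty and n <= 0, or the stack is non-empty and n <= m. For example, `layoutL [Open 3, Lex "x", Angle 3, Lex "y"] []` yields `["{","x",";","y","}"]`.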
Re: [Haskell-cafe] cabal install ghc-mod installs 3 years old version
Doesn't Cabal tend to install library packages under the .cabal folder? So blowing it away gets rid of the problematic ones. (And everything else.) On 25 Feb 2013, at 16:56, Brent Yorgey wrote: > On Sun, Feb 24, 2013 at 02:33:55PM +, Niklas Hambüchen wrote: >> You are right, my "ghc-7.4.2" was broken in ghc-pkg list; I fixed the >> problem by killing my .cabal folder (as so often). > > Surely you mean by killing your .ghc folder? I do not see what effect > killing your .cabal folder could possibly have on broken packages. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] ANN: lazy-csv - the fastest and most space-efficient parser for CSV
On 25 Feb 2013, at 11:14, Oliver Charles wrote: > Obvious question: How does this compare to cassava? Especially cassava's > Data.CSV.Incremental module? I specifically ask because you mention that it's > " It is lazier, faster, more space-efficient, and more flexible in its > treatment of errors, than any other extant Haskell CSV library on Hackage" > but there is no mention of cassava in the website. Simple answer - I have never heard of cassava, and suspect it did not exist when I first did the benchmarking. I'd be happy to re-do my performance comparison, including cassava and any other recent-ish CSV libraries, if I can find them. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] ANN: lazy-csv - the fastest and most space-efficient parser for CSV
There are lots of Haskell CSV parsers out there. Most have poor error-reporting, and do not scale to large inputs. I am pleased to announce an industrial-strength library that is robust, fast, space-efficient, lazy, and scales to gigantic inputs with no loss of performance. http://code.haskell.org/lazy-csv/ Downloads from Hackage: http://hackage.haskell.org/package/lazy-csv This library has been in industrial use for several years now, but this is the first public release. No doubt the API is not as general as it could be, but it already serves many purposes very well. I'm happy to receive bug reports and suggestions for improvements. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
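The laziness claim can be illustrated with a toy splitter. This is not the lazy-csv API (it has no quoting or escaping), just a sketch of the principle: because rows are produced lazily, consuming one row forces only that row's portion of the input.

```haskell
-- Toy CSV row splitter (NOT the lazy-csv API; no quoting/escaping).
-- Because it is lazy, taking the first row of an infinite input terminates.
rows :: String -> [[String]]
rows = map fields . lines
  where
    fields s = case break (== ',') s of
      (f, ',' : rest) -> f : fields rest
      (f, _)          -> [f]

main :: IO ()
main = print (head (rows ("a,b,c\n" ++ cycle "x,y\n")))   -- ["a","b","c"]
```

A strict parser would have to traverse (or at least validate) the whole input before returning anything, which is exactly what prevents scaling to gigantic files.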
[Haskell-cafe] Call for Nominations: Haskell Prime language committee
Dear Haskell lovers,

The Haskell Prime process for standardisation of new versions of the Haskell language is at something of an impasse. Since the Haskell 2010 Report was issued (at the end of 2009), there has been very little momentum to formalise existing extensions and generalisations, nor appetite to decide on whether any such extensions should be adopted as part of the core language standard.

We therefore seek nominations for new members of the Haskell Prime language committee; people who have the enthusiasm to take this forward, as well as some relevant experience. If you think you would like to contribute, please nominate yourself by sending an email to haskell-2011-commit...@haskell.org describing who you are, and what you think the committee might hope to achieve. The address above is populated by the existing committee members, but is now (temporarily) open to all during the nomination period.

Nominations close in 3 weeks' time, i.e. they are to be received by the end of Sunday 24th February 2013. The process for deciding a new language committee is described at http://hackage.haskell.org/trac/haskell-prime/wiki/Committee

Regards, Malcolm
___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Hackage suggestion: Gather the list of the licenses of all dependencies of a package
On 15 Dec 2012, at 16:54, Michael Snoyman wrote: > I would strongly recommend reconsidering the licensing decision of cpphs. > Even if the LICENSE-commercial is sufficient for non-source releases of > software to be protected[1], it introduces a very high overhead for companies > to need to analyze a brand new license. Many companies have already decided > BSD3, MIT, and a number of other licenses are acceptable. Well, if a company is concerned enough to make an internal policy on open source licences at all, one might hope that they would perform due diligence on them too. For instance, the FSF have lawyers, and have done enough legal work to be able to classify 48 licences as both "free" and GPL-compatible, a further 39 licences as "free" but non-GPL-compatible, and 27 open source licences that are neither "free" nor GPL-compatible. This kind of understanding is what lawyers are supposed to be for. Making them look at another (short) licence is not really a big deal, especially when it closely resembles BSD, which they have already supposedly decided is good. My suspicion, though, is that most of the companies who even think about this question are small, do not have their own lawyers, and are making policy on the hoof, motivated purely by fear. I also suspect that they do not even have the resources to read the licence for each library in its entirety, to determine whether it is in fact BSD3 or MIT, as claimed, or whether someone has subtly altered it. Also, I think I could be pretty confident that there are many shipping products that contain genuine BSD-licensed code, but which do not comply with its terms. > It could be very difficult to explain to a company, "Yes, we use this > software which says it's LGPL, but it has this special extra license which, > if I'm reading it correctly, means you can't be sued, but since the author of > the package wrote it himself, I can't really guarantee what its meaning would > be in a court of law." 
Like I say, if someone claims the software to be BSD-licensed, someone has to read the licence text itself anyway, to determine whether the claim is true. Pretty much every copy of the BSD licence text differs anyway, at least by the insertion of the authors' names in various places, and sometimes there are varying numbers of clauses. > Looking at the list of reverse dependencies[2], I see some pretty heavy > hitters. Via haskell-src-exts[3] we end up with 75 more reverse dependencies. > I'd also like to point out that cpphs is the only non-permissively-licensed > dependency for a large number of packages. I'm glad that cpphs is widely used. I'm also glad that it remains free, and I disagree with you that its dual-licence model is non-permissive. I would like to encourage more Haskell developers to adopt free licensing. Don't be bullied by BSD evangelists! BSD is not the only way to a good citizen of the community! Your libraries can be delivered to clients as products, without you having to give up all rights in them! It's not like I'm saying to companies "if you make money out of my code, you have to pay me a fee". All I'm saying, to everyone, is "if you notice a bug in my code and fix it, tell me". This is fully compatible with allowing people to release my code to their clients inside products. > I can give you more detailed information about my commercial experience > privately. But I can tell you that, in the currently situation, I have > created projects for clients for which Fay[4] would not be an option due to > the cpphs licensing issue. If you are complaining about the crazy policies that many companies adopt about the use of free software within their business, then I have plenty of sympathy for that too. I know of one which has a policy of "no use of open source code whatsoever", but runs thousands of linux servers. 
:-) Also, many companies with large numbers of software engineers on staff apparently prefer to buy crappy commercial products and pay handsomely for non-existent support, instead of running high-quality open-source software with neither initial nor ongoing costs, and where bugfixes are often available the same day as you report the bug. But hey ho. Corporate policy is usually made by people with neither technical nor legal expertise. As regards cpphs, if you don't want to use it because of its licences, that is your choice. You can always use some other implementation of the C pre-processor if you wish. GHC has always refused to distribute cpphs, on the basis of its GPL licence, and instead chose to distribute GNU's gcc on Windows. :-) (I hope you see the irony!) Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Hackage suggestion: Gather the list of the licenses of all dependencies of a package
On 13 Dec 2012, at 10:41, Petr P wrote: > In particular, we can have a BSD package that depends on a LGPL package, and > this is fine for FOSS developers. But for a commercial developer, this can be > a serious issue that is not apparent until one examines *every* transitive > dependency. This might a good time to remind everyone that every single program compiled by a standard GHC is linked against an LGPL library (the Gnu multi-precision integer library) - unless you take care first to build your own copy of the compiler against the integer-simple package instead of integer-gmp. As far as I know, there are no ready-packaged binary installers for GHC that avoid this LGPL'd dependency. http://hackage.haskell.org/trac/ghc/wiki/ReplacingGMPNotes Just saying. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Hackage suggestion: Gather the list of the licenses of all dependencies of a package
On 13 Dec 2012, at 18:40, Michael Snoyman wrote: > I'm not quite certain what to make of: > > If you have a commercial use for cpphs, and feel the terms of the (L)GPL > are too onerous, you have the option of distributing unmodified binaries > (only, not sources) under the terms of a different licence (see > LICENCE-commercial). > > It seems like that's saying "if you really want to, use the BSD license > instead." But I'm not sure what the legal meaning of "If you have a > commercial use" is. Malcolm: could you clarify what the meaning is? No, the LICENCE-commercial is not BSD. Read it more carefully. :-) So, I dual-licensed cpphs (which was originally only LGPL as a library, GPL as a binary), in response to a request from a developer (working for a company) who wished to use it as a library linked into their own software (rather than a standalone executable), but who was unable to convince his boss that LGPL would be acceptable. IIRC, the software was going to end up in some gadget to be sold (and therefore the code was being distributed, indirectly). The commercial licence I provided for him was intended to uphold the spirit of the LGPL, without going as far as BSD in laxity. So, if you simply want to use cpphs in a distributed product (but not modify it), it is very easy. The moment you want to distribute a modified version, you must abide by the LGPL, which to me essentially means that you contribute back your changes to the community. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] education or experience?
On 9 Dec 2012, at 16:31, Doug McIlroy wrote:
> In fact the FP community came late to some of these, just as programming languages at large came late to garbage collection.
>
> Lazy evaluation--at the heart of spreadsheets since the beginning.

Lazy evaluation for the lambda calculus - 1971 (Wadsworth)
Lazy evaluation in a programming language - 1976 (Henderson & Morris; Friedman & Wise)

I wouldn't call those dates late, especially since VisiCalc, the first widely-used electronic spreadsheet, entered the market in 1978.

Regards, Malcolm
___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
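The spreadsheet connection is easy to demonstrate in Haskell itself: a self-referential "sheet" whose cells are evaluated on demand, in dependency order, just as a spreadsheet recalculates. The cell names and formulas below are invented for the example.

```haskell
import qualified Data.Map as M

-- A lazy "spreadsheet": each cell's formula refers to other cells through
-- the map itself; demanding C1 forces B1, which forces A1, on demand.
sheet :: M.Map String Double
sheet = M.fromList
  [ ("A1", 1)
  , ("B1", sheet M.! "A1" + 2)
  , ("C1", sheet M.! "B1" * 10)
  ]

main :: IO ()
main = print (sheet M.! "C1")   -- 30.0
```

This works because Data.Map is lazy in its values: building the map does not evaluate any formula, so the recursive reference to `sheet` is unproblematic until a cell is actually demanded.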
Re: [Haskell-cafe] Is it possible to have constant-space JSON decoding?
See also the incremental XML parser in HaXml, described in "Partial parsing: combining choice with commitment", IFL 2006. It has constant space usage (for some patterns of usage), even with extremely large inputs. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.135.7512&rep=rep1&type=pdf

On 5 Dec 2012, at 05:37, Johan Tibell wrote:
> Hi Oleg,
>
> On Tue, Dec 4, 2012 at 9:13 PM, wrote:
>> I have been doing, for several months, constant-space processing of large XML files using iteratees. The file contains many XML elements (which are a bit more complex than a number). An element can be processed independently. After the parser finishes with one element, and dumps the related data, the processing of the next element starts anew, so to speak. No significant state is accumulated for the overall parsing, apart from the counters of processed and bad elements, for statistics. XML is somewhat like JSON, only more complex: an XML parser has to deal with namespaces, parsed entities, CDATA sections and other interesting stuff. Therefore, I'm quite sure there should not be fundamental problems in constant-space parsing of JSON.
>>
>> BTW, the parser itself is described there
>>     http://okmij.org/ftp/Streams.html#xml
>
> It certainly is possible (using a SAX style parser). What you can't have (I think) is a function:
>
>     decode :: FromJSON a => ByteString -> Maybe a
>
> and constant-memory parsing at the same time. The return type here says that we will return Nothing if parsing fails. We can only do so after looking at the whole input (otherwise how would we know if it's malformed).
>
> The use cases aeson was designed for (which I bet is the majority use case) is parsing smaller messages sent over the network (i.e. in web service APIs), so this is the only mode of parsing it supplies.
>
> -- Johan
___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
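Oleg's discipline can be shown in miniature without any parsing library: process each element independently, retain only counters, and never hold the whole input. The `parseElem` function below is an invented stand-in for a real per-element parser.

```haskell
import Data.List (foldl')

-- Stand-in per-element "parser" (invented for this sketch).
parseElem :: String -> Either String Int
parseElem s = case reads s of
  [(n, "")] -> Right n
  _         -> Left s

-- Fold over the stream keeping only two counters, in the style Oleg
-- describes: (elements parsed ok, malformed elements).
countElems :: [String] -> (Int, Int)
countElems = foldl' step (0, 0)
  where
    step (ok, bad) e = ok `seq` bad `seq`   -- keep the counters evaluated
      case parseElem e of
        Right _ -> (ok + 1, bad)
        Left _  -> (ok, bad + 1)

main :: IO ()
main = print (countElems (map show [1 .. 100000 :: Int] ++ ["oops"]))
```

The `seq` calls matter: without them the counters would accumulate thunks, quietly defeating the constant-space goal that the strict fold is meant to provide.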
Re: [Haskell-cafe] tplot (out of memory)
For the record, it turned out that the key difference between the linux machines was the fonts packages installed via RPM. The strace utility told me that the crash happened shortly after cairo/pango attempted (and failed) to open some font configuration files. After installing some of the X11 font packages (and some others too), the crash went away. On 18 Oct 2012, at 09:55, malcolm.wallace wrote: > Did you ever solve this? I have a similar message ( user error (out of > memory) ) arising from a different app (not tplot) that uses the Haskell > Chart library (and cairo underneath). On some linux machines, it crashes, on > others it works fine. I can find no environment differences between the > machines. The app does not use a lot of memory, and the machine is not > running out of physical or swap. > Regards, > Malcolm > > > On 04 Sep, 2012,at 04:01 PM, Eugene Kirpichov wrote: > >> Hi Manish, >> >> Please provide the input file, I'll debug this. >> >> On Mon, Sep 3, 2012 at 1:06 PM, Manish Trivedi wrote: >> > Hi, >> > >> > I am running into a weird out of memory issue. While running timeplot over >> > an input file having ~800 rows. From below provided info, seems like >> > machine >> > has enough ram (1849MB). >> > Please let me know if anyone has pointers. 
>> > >> > # free -m >> > total used free shared buffers cached >> > Mem: 3825 1975 1849 0 13 71 >> > -/+ buffers/cache: 1891 1934 >> > Swap: 4031 111 3920 >> > >> > #time tplot -o out.png -or 1024x768 -k 'CurrentPerHour' 'lines' -k >> > 'RequiredPerHour' 'lines' -if adgroup_delivery_chart.input -tf 'date >> > %Y-%m-%d %H:%M:%OS' >> > >> > tplot: user error (out of memory) >> > >> > real 0m0.026s >> > user 0m0.018s >> > sys 0m0.008s >> > >> > -Manish >> > >> > ___ >> > Haskell-Cafe mailing list >> > Haskell-Cafe@haskell.org >> > http://www.haskell.org/mailman/listinfo/haskell-cafe >> > >> >> >> >> -- >> Eugene Kirpichov >> http://www.linkedin.com/in/eugenekirpichov >> >> ___ >> Haskell-Cafe mailing list >> Haskell-Cafe@haskell.org >> http://www.haskell.org/mailman/listinfo/haskell-cafe ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Survey: What are the more common Haskell IDEs in use ?
At my workplace, most people who code in Haskell use MS Visual Studio as their Haskell IDE. :-) But they don't read Haskell-cafe... Regards, Malcolm On 24 Nov 2012, at 07:28, Dan wrote: > Because I see there are many preferences on what IDE to use for Haskell > I've created a quick survey on this topic. > > > (if any is missing, etc) ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] GHC maintenance on Arch
I think you will find that the Original Poster did not ask about ArchHaskell, but rather about Haskell on the Arch platform. He was completely unaware of ArchHaskell as a project. This might be a source of some confusion, and help to explain divergent attitudes. Regards, Malcolm On 29 Oct 2012, at 14:41, Magnus Therning wrote: > Please stay on topic, this is *not* a discussion about Haskell > Platform[1], it's a discussion on ArchHaskell[2]. Please read up on > the mailing list archives first, and then, if you still feel there's a > need to discuss HP in ArchHaskell (which isn't the same thing as Arch > itself) then please start a new thread. > > /M > > [1]: http://www.haskell.org/platform/ > [2]: https://wiki.archlinux.org/index.php/ArchHaskell > > On Mon, Oct 29, 2012 at 2:53 PM, Brandon Allbery wrote: >> On Mon, Oct 29, 2012 at 5:56 AM, Magnus Therning >> wrote: >>> >>> Now I'm going to run the risk of upsetting you quite a bit by being >>> completely blunt. >> >> >> Indeed. >> >>> >>> You come across in your mail like someone who has thought through your >>> own situation, but fail to see the larger picture. You do know *your* >> >> >> May I ask you a question, then? >> >> Does the Haskell Platform have any reason to exist? >> >> Supposedly, the Haskell community backs the Haskell Platform as the way that >> most users should be using the Platform. Yet we have here a vendor platform >> which does not support it, and newcomers who notice this and question it are >> chastised for not thinking about the needs of other people. This suggests >> that the Haskell Platform is unimportant and perhaps disruptive to some >> significant group of people... is this so? >> >> And then, looking at your own message, I must ask: have you considered that >> the Platform is aimed at the great many people who do not have large amounts >> of expertise maintaining their own personal Haskell ecosystem. Or are your >> needs so important that these people must in fact be told to deal? 
>> >> Or, to phrase in your own words: >> >>> You come across in your mail like someone who has thought through your >>> own situation, but fail to see the larger picture. >> >> >> -- >> brandon s allbery kf8nh sine nomine associates >> allber...@gmail.com ballb...@sinenomine.net >> unix/linux, openafs, kerberos, infrastructure http://sinenomine.net >> > > > > -- > Magnus Therning OpenPGP: 0xAB4DFBA4 > email: mag...@therning.org jabber: mag...@therning.org > twitter: magthe http://therning.org/magnus > > ___ > Haskell-Cafe mailing list > Haskell-Cafe@haskell.org > http://www.haskell.org/mailman/listinfo/haskell-cafe ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Optimal line length for haskell
It is kind of ironic that the wide code examples in the blog post are wrapped at 65 chars by the blog formatting. Regards, Malcolm On 29 Oct 2012, at 11:50, Rustom Mody wrote: > There was a recent discussion on the python list regarding maximum line > length. > It occured to me that beautiful haskell programs tend to be plump (ie have > long lines) compared to other languages whose programs are 'skinnier'. > My thoughts on this are at > http://blog.languager.org/2012/10/layout-imperative-in-functional.html. > > Are there more striking examples than the lexer from the standard prelude? > [Or any other thoughts/opinions :-) ] ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Either Monad and Laziness
On 12 Sep 2012, at 16:04, Eric Velten de Melo wrote: The behaviour I want to achieve is like this: I want the program when compiled to read from a file, parsing the PGM and at the same time apply transformations to the entries as they are read and write them back to another PGM file. >>> >>> Such problems are the main motivation for iteratees, conduits, pipes, >>> etc. Every such library contains procedures for doing exactly what you >>> want. >>> > > It would be really awesome, though, if it were possible to use a > parser written in Parsec with this, in the spirit of avoiding code > rewriting and enhancing expressivity and abstraction. The polyparse library on Hackage is another parser combinator framework that allows lazy incremental parsing. http://hackage.haskell.org/package/polyparse A PDF paper/tutorial is here: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.118.1754&rep=rep1&type=pdf Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] COBOL-85 parser, anyone?
Ralf Laemmel would probably be the world's foremost expert in parsing and analysing Cobol using functional languages. Try contacting him directly at uni-koblenz.de Some of his relevant papers: http://homepages.cwi.nl/~ralf/padl03/ http://homepages.cwi.nl/~ralf/ctp/ On 20 Jul 2012, at 10:08, Richard O'Keefe wrote: > Does anyone have a parser for COBOL-85 written in Haskell, > or written using some freely available tool that communicates > easily with Haskell? > > I don't need it _yet_, but I'm talking with someone who is > trying to get access to a real legacy site with a bunch of, > well, basically COBOL 85, but there are extensions... > We're not talking about transformation at this stage, just > analysis. > > I could probably hack up the extensions given a place to start. > > I've found some papers and more dead links than I care for > and lots of mention of ASF+SDF which is apparently superseded > by Rascal. > > > > ___ > Haskell-Cafe mailing list > Haskell-Cafe@haskell.org > http://www.haskell.org/mailman/listinfo/haskell-cafe ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Performance with do notation, mwc-random and unboxed vector
On 11 Jun 2012, at 10:38, Dmitry Dzhus wrote:
> main = do
>   g <- create
>   e' <- VU.replicateM count $ standard g
>   return ()

In all likelihood, ghc is spotting that the value e' is not used, and that there are no side-effects, so it does not do anything at runtime. If you expand the action argument to replicateM, such that it uses do-notation instead, perhaps ghc can no longer prove the lack of side-effects, and so actually runs the computation before throwing away its result. When writing toy benchmarks in a lazy language, it is always important to understand to what extent your program _uses_ the data from a generator, or you are bound to get misleading performance measurements. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Generalizing (++) for monoids instead of using (<>)
On 4 May 2012, at 10:02, Alberto G. Corona wrote:
> Restrict (++) String -> String -> String
>
> that locally would restrict the type within the module.

import qualified Prelude
import Prelude hiding ((++))

(++) :: String -> String -> String
(++) = (Prelude.++)

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Impact of "try" on Parsec performance
On 3 Mar 2012, at 04:30, Omari Norman wrote: > > On the other hand, I notice that attoparsec and polyparse backtrack by > default, and attoparsec claims to be faster than Parsec (I can't remember if > polyparse makes this claim). In my benchmarks, polyparse has about the same performance as Parsec, when using the monadic style (possibly a very tiny bit faster). But polyparse is hugely, asymptotically, faster than Parsec when your parser is written in applicative style, your input text is large, and you consume the parse results lazily. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Some thoughts on Type-Directed Name Resolution
On 8/02/2012, at 14:16, Steve Horne wrote: > > I haven't given a lot of thought to updates. > I very much fail to see the point of replacing prefix function application with postfix dots, merely for field selection. There are already some imperfect, but adequate, solutions to the problem of global uniqueness of field names. But you have now mentioned what is really bothering me about this discussion: record updates are simply the most painful and least beautiful part of the Haskell syntax. Their verbosity is astonishing compared to the careful terseness of every other language construct. If we could spend some effort on designing a decent notation for field updates, I think it would be altogether more likely to garner support than fiddling with dots. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
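For readers following the thread, a minimal illustration of the record-update syntax being complained about (the type and field names here are invented for the example):

```haskell
data Config = Config { host :: String, port :: Int }
  deriving Show

-- Record update: the whole "c { field = ... }" form must be spelled out,
-- and each updated field repeats a selector applied to the record.
bumpPort :: Config -> Config
bumpPort c = c { port = port c + 1 }

main :: IO ()
main = print (port (bumpPort (Config "localhost" 8080)))  -- 8081
```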
Re: [Haskell-cafe] FP activities and researchers in Warwickshire
"Fun in the afternoon", a termly gathering of UK FP people, will be in Oxford on 28th Feb. http://sneezy.cs.nott.ac.uk/fun/ On 7/02/2012, at 18:32, Ivan Perez wrote: > Hello, > I recently moved to Kenilworth, Warwickshire, UK, and I'd like > to know if there are meetings, talks, or any FP-related activities > going on around here. I contacted somebody at Warwick > University but, from what I understood, their Formal Methods > group doesn't exist as such any longer and they don't carry out > any coordinated, regular events regarding FP anymore. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] strict version of Haskell - does it exist?
On 29 Jan 2012, at 22:25, Ertugrul Söylemez wrote: > A strict-by-default Haskell comes with the > implication that you can throw away most of the libraries, including the > base library. So yes, a strict-by-default Haskell is very well > possible, but the question is whether you actually want that. I > wouldn't, because a lot of my code relies on the standard semantics. At work, we have a strict version of Haskell, and indeed we do not use the standard libraries, but have built up our own versions of the ones we use. However, our compiler is smart enough to transform and optimise the source code *as if* it were non-strict: it is only at runtime that things are evaluated strictly. This means that, in general, you can rely on the standard semantics to a surprisingly large extent. For instance, maybe (error "foo") lines (Just "hello\nworld") will succeed without calling error, just like in Haskell. Even if the final argument is supplied only at runtime, not statically, it will still do the right thing. However, the downside of being strict at runtime is frequently poorer performance. Stack overflows are common if you use explicit recursion: it is better to use higher-order functions (map, fold, until) that are implemented at a lower level in C (i.e. not directly using explicit recursion themselves). This is a good thing of course - thinking of data structures in the aggregate, rather than piecemeal. However, bulk operations do transform the entire data structure, not merely the fragments that are needed for the onward computation, so it can often be a net performance loss. The standard lazy computational paradigm of generate-and-test is therefore hideously expensive, on occasion. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
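The generate-and-test pattern mentioned above can be sketched as follows; under lazy evaluation only the prefix of candidates up to the first success is ever built, whereas a strict runtime would try to evaluate the whole (here infinite) candidate list. The function names are illustrative, not from any library:

```haskell
-- Generate-and-test: laziness means the candidate list is only
-- evaluated as far as the first element passing the test.
firstSquareOver :: Int -> Int
firstSquareOver limit = head (filter (> limit) (map (^ 2) [1 ..]))

main :: IO ()
main = print (firstSquareOver 50)  -- examines only 1..8; prints 64
```

Under strict evaluation, `map (^ 2) [1 ..]` would diverge before `filter` ever ran, which is exactly why strict code must restructure such pipelines.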
Re: [Haskell-cafe] Can't install hspec
On 23 Jan 2012, at 07:01, Erik de Castro Lopo wrote: > /tmp/hspec-0.9.04062/hspec-0.9.0/Setup.lhs:2:10: > Could not find module `System' > It is a member of the hidden package `haskell98-2.0.0.0'. In ghc-7.2, you cannot use the haskell98 package in conjunction with the base package. The simplest solution is to replace the "import System" with the appropriate replacement module in base: most probably System.Environment, System.Exit, or similar. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
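For instance, assuming the Setup script only needed getArgs and exitWith from the old monolithic System module, the fix is just a matter of importing the base equivalents:

```haskell
-- Before (haskell98):  import System
-- After (base):
import System.Environment (getArgs)
import System.Exit (ExitCode (ExitSuccess), exitWith)

main :: IO ()
main = do
  args <- getArgs
  print args          -- with no arguments, prints []
  exitWith ExitSuccess
```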
Re: [Haskell-cafe] [C][enums][newbie] What is natural Haskell representation of such enum?
> 2012/1/22 Данило Глинський
> What is natural Haskell representation of such enum?
>
> enum TypeMask
> {
>     UNIT,
>     GAMEOBJECT,
>
>     CREATURE_OR_GAMEOBJECT = UNIT | GAMEOBJECT
> };

I don't think that definition makes any sense in C, because UNIT is 0, so UNIT | GAMEOBJECT == GAMEOBJECT == 1.

Nevertheless, in Haskell something vaguely similar might be:

data TypeMask = UNIT | GAMEOBJECT | CREATURE_OR_GAMEOBJECT

> // 1-byte flagged enum
> enum TypeMask
> {
>     // ...
>     UNIT       = 0x0004,
>     GAMEOBJECT = 0x0008,
>     // ...
>
>     CREATURE_OR_GAMEOBJECT = UNIT | GAMEOBJECT,
>     WORLDOBJECT = UNIT | PLAYER | GAMEOBJECT | DYNAMICOBJECT | CORPSE
>     // ... even more enum combos ...
> };

import Data.Bits

-- PLAYER, DYNAMICOBJECT and CORPSE are needed by WORLDOBJECT below;
-- since the original header elides their values, the ones here are
-- illustrative only.
data TypeMask = UNIT | PLAYER | GAMEOBJECT | DYNAMICOBJECT | CORPSE
              | CREATURE_OR_GAMEOBJECT | WORLDOBJECT

instance Enum TypeMask where
  fromEnum UNIT          = 0x4
  fromEnum PLAYER        = 0x10   -- illustrative
  fromEnum GAMEOBJECT    = 0x8
  fromEnum DYNAMICOBJECT = 0x20   -- illustrative
  fromEnum CORPSE        = 0x40   -- illustrative
  fromEnum CREATURE_OR_GAMEOBJECT =
      fromEnum UNIT .|. fromEnum GAMEOBJECT
  fromEnum WORLDOBJECT =
      fromEnum UNIT .|. fromEnum PLAYER .|. fromEnum GAMEOBJECT
      .|. fromEnum DYNAMICOBJECT .|. fromEnum CORPSE
  toEnum 0x4 = UNIT
  toEnum 0x8 = GAMEOBJECT
  toEnum _   = error "unspecified enumeration value of type TypeMask"

-- Note the use of (.&.) here: testing membership in a mask needs a
-- bitwise AND; or-ing a non-zero mask into anything is always non-zero.
isCreatureOrGameObject :: Int -> Bool
isCreatureOrGameObject x = (x .&. fromEnum CREATURE_OR_GAMEOBJECT) /= 0

isWorldObject :: Int -> Bool
isWorldObject x = (x .&. fromEnum WORLDOBJECT) /= 0

-- But fundamentally, this is not an idiomatic Haskell way of doing things.
-- The other posts in this thread have shown more Haskell-ish translations.

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
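As a sketch of the "more Haskell-ish" direction alluded to at the end of that message: keep the atomic flags as a plain derived enumeration, compute each flag's bit from its constructor index, and treat masks as ordinary Ints. This is an illustration with invented names, not the original poster's C layout:

```haskell
import Data.Bits (bit, (.&.), (.|.))

data TypeFlag = Unit | Player | GameObject | DynamicObject | Corpse
  deriving (Eq, Show, Enum, Bounded)

-- Each flag occupies its own bit, derived from the constructor index.
flagBit :: TypeFlag -> Int
flagBit = bit . fromEnum

-- Combinations are just lists of flags, folded into a mask.
toMask :: [TypeFlag] -> Int
toMask = foldr (\f m -> flagBit f .|. m) 0

hasFlag :: Int -> TypeFlag -> Bool
hasFlag mask f = mask .&. flagBit f /= 0

main :: IO ()
main = do
  let creatureOrGameObject = toMask [Unit, GameObject]
  print (hasFlag creatureOrGameObject Unit)    -- True
  print (hasFlag creatureOrGameObject Player)  -- False
```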
Re: [Haskell-cafe] Tracing Prelude.read exceptions
I suggest switching from 'read' to a real parser that can give you proper error messages. I use Text.Parse from the polyparse package, which is designed to parse back exactly the format produced by derived Show instances. To derive the Parse class from your datatypes, the tool DRiFT is handy. 'runParser parse' will give you Either String a, where the string contains any error message. Regards, Malcolm On 11/12/2011, at 18:19, dokondr wrote: > Hi, > I got quite used to a sequence providing simple data persistence : > 1) Store my data to a file: > writeFile fileName (show someData) > > 2) Some time later read this data back: > line <- readFile fileName > let someData = read line :: SomeDataType > > Having this done hundreds of times I now got stuck with step 2) trying to > read moderately complex structure back. I get read exception in run-time: > fromList *** Exception: Prelude.read: no parse > > I have checked and rechecked my types, data files, etc. - and still no idea. > > So my question: > Is there any way to trace Prelude.read exceptions to see exactly on what data > element read fails in run-time? > > Thanks! > > > > > > > > ___ > Haskell-Cafe mailing list > Haskell-Cafe@haskell.org > http://www.haskell.org/mailman/listinfo/haskell-cafe ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] HaXml 1.13 -> 1.22 upgrade
The extra parameter "i" is for "information" attached to each node of the tree. As you have correctly guessed, the parser fills in this field with positional information relating to the original source document, which is useful for instance if you are validating or checking the original document. When building new parts of a document, it is perfectly fine to attach "noPos". You can alternatively replace all of the informational items in the tree, with for instance "fmap (const ())" if you don't care about them. The information fields are useful for other purposes though, e.g. to hold the relevant xmlns namespace for subtrees; or to distinguish added/removed/modified subtrees in a diff-like viewer. Regards, Malcolm On 11/12/2011, at 14:56, Michael Orlitzky wrote: > On 12/11/2011 01:36 AM, Antoine Latter wrote: >> >> It looks like the function 'xmlParse' returns a value of type >> 'Document Posn', according to the API docs. I'm guessing the 'Posn' >> value is used to annotate the position in the source document a >> particular piece of XML came from, so you can report errors better. >> >> Since the pretty-printing functions ignore it, you can replace it with >> whatever you want, even with a value of a different type if you have a >> need to annotate the tree. > > Thanks, I was able to get it working after a little sleep/coffee. > > The migration guide says to replace all of the 'i' with () if you don't care > about them, so I tried that, but it doesn't work in this case: the two 'i' in > (CElem (Element i) i) have to match. > > The only way I see to construct a Posn is with noPos, so I stuck that in > there. It's probably not correct, but it compiles and runs, so it's correct =) > > ___ > Haskell-Cafe mailing list > Haskell-Cafe@haskell.org > http://www.haskell.org/mailman/listinfo/haskell-cafe ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Does anyone maintain trac.haskell.org?
>> The community Trac hosting server isn't sending email, which Trac requires. >> >> I've submitted several tickets to supp...@community.haskell.org but >> gotten no response. >> >> Does anyone maintain that server anymore? > > Had the same problem in July. Raised a ticket etc. I don't think there > is anyone actually responsible for the trac server. Indeed, the community server (including trac) is administered on a volunteer best-effort basis. Unfortunately, we do not have sufficient trac-admin expertise on the volunteer team in order to know what is wrong here, or to fix it. If there is a trac expert out there who could help us diagnose and fix the problem, we would be glad of their aid. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] GHCi and Cairo on Windows
On Windows, it is necessary to add to your PATH variable the bin directory where the gtk+ DLL lives. Note, this is the C DLL, not the Haskell one produced by gtk2hs. For instance, on my machine the relevant directory is C:\workspace\ext\gtk+-2.20\bin. It is quite likely different on yours. On 5/12/2011, at 8:23, smelt...@eecs.oregonstate.edu wrote: > In effort to keep my work cross-platform, I am trying to get GHCi and Cairo > working together nicely on Windows (as a back-end to Diagrams, if it > matters.) When loading the library in GHCi I get the following error: > > Loading package cairo-0.12.2 ... linking ... ghc: unable to load package > `cairo-0.12.2' > > I am able to build, link, and execute successfully using 'ghc --make' so I > know the libraries are installed correctly. I'm using GHC as packaged in the > most recent version of the Haskell Platform and the most recent all-in-one > Windows bundle of GTK. > > I've seen quite similar errors reported months ago with no apparent solution. > > Any help would be appreciated! > > --Karl > > ___ > Haskell-Cafe mailing list > Haskell-Cafe@haskell.org > http://www.haskell.org/mailman/listinfo/haskell-cafe ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
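In a Windows command prompt, extending PATH for the current session might look like this (the directory shown is the one from my machine, quoted above; the actual location will differ per installation):

```
set PATH=C:\workspace\ext\gtk+-2.20\bin;%PATH%
ghci MyDiagram.hs
```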
Re: [Haskell-cafe] Hackage down!
And, amusingly, http://downforeveryoneorjustme.com/ is also down, having exceeded its Google App Engine quota. [ But the similarly named .org site still works, and confirms that hackage is down. ] Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Superset of Haddock and Markdown
On 20 Nov 2011, at 22:20, Ivan Lazar Miljenovic wrote: > On 21 November 2011 03:19, David Fox wrote: >> On Fri, Nov 18, 2011 at 1:10 AM, Ertugrul Soeylemez wrote: >>> Ivan Lazar Miljenovic wrote: >>> Wasn't there talk at one stage of integrating pandoc into haddock? >>> >>> I wouldn't mind Haddock depending on Pandoc, at least optionally >>> (-fmarkdown-comments). Taking this to its conclusion you could easily >>> have syntax-highlighted code examples in Haddock documentations and >>> allow alternative output formats. >> >> I'm not sure the pandoc license (GPL) is compatible with the GHC license. > > Do you mean because GHC ships with a Haddock binary? GHC also ships with a GPL'd gcc binary, on at least some platforms. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Package documentation complaints -- and a suggestion
> The problem isn't social pressure to be stable, it's the ambiguity of what > "stable" means. If Hackage 2 institutes a policy whereby things claiming to > be stable are treated better, then "stable" is likely to become the new > "experimental". I'd say, rather than rely on social agreement on what terms mean, let's just collect lots of automated metrics, and present them as extra information on the hackage pages. At work, we have all modules scored by hlint metrics, and doclint metrics. (Doclint complains about modules without a module header comment, and type signatures without haddock comments.) We count infractions and have a "top ten" hall-of-shame, as well as placing the scores in the module documentation itself. We also have a "fingerprint" for every release (basically the API type signatures), and the size of fingerprint-diffs between releases is a rough measure of API-churn. Some of these measures are designed to place social pressure on authors to improve their code/documentation, but they have a dual role in allowing users to get a feel for the quality of the code they are using, without imposing any external hierarchy on which metrics are more important in any given situation. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Fwd: how to increase the stack size
>> when I am running the program in my terminal on ubuntu its showing me
>> GHC stack-space overflow: current limit is 536870912 bytes.
>> Use the `-K' option to increase it.
>> how can i increase the stack size? Plz help me out

Others have explained how to "Use the `-K' option". However, those explanations did not take account of the fact that ghc now ignores +RTS options by default at runtime. You need to compile the program with an extra option, -rtsopts, if you want the +RTS options to work at runtime. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
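A minimal illustration of the two steps (the program name is hypothetical):

```
# Compile with RTS options enabled, then raise the stack limit at runtime
ghc -rtsopts -O2 Main.hs -o main
./main +RTS -K512m -RTS
```

Without -rtsopts at compile time, the +RTS ... -RTS section is rejected when the program runs.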
Re: [Haskell-cafe] Is it possible to represent such polymorphism?
> Although I still wonder why something so simple in C++ is actually more > verbose and requires less known features in Haskell...What was the design > intent to disallow simple overloading? The "simple" C++ overloading you want to add to Haskell, is in fact rather semantically complex, and it leads to undecidability of the type system. The inherent formal complexity here suggests that this form of overloading is highly unlikely to be the correct solution in practice to the problem you are trying to solve. And even if it were a technically correct solution, it is likely to be unmaintainable and fragile to code changes. There is a high probability that a more-formally-tractable solution exists, and that using it will improve your understanding of the problem at hand, and make your code more regular and robust to change. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] mapM is supralinear?
On 27 Sep 2011, at 11:23, Arseniy Alekseyev wrote: > Malcolm, one should amortize the cost of the collection over the > amount of free space allocated rather than recovered They are the same thing. You can only allocate from the space that has been recovered. It is true that generational GC has a nursery area of largely constant size, which is always used for fresh allocation, but that is usually considered an optimisation (albeit a considerable one), which does not fundamentally change the underlying asymptotic costs of the major collections. When you have large heap residency, the proportion of time spent in GC increases. > (there are cases > when no space is recovered, would you call the GC cost infinite > then?). Indeed I would. When that happens, usually the program aborts without completing its computation, so the computation is infinitely delayed. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] mapM is supralinear?
On 26 Sep 2011, at 23:14, Arseniy Alekseyev wrote: > Garbage collection takes amortized O(1) per allocation, doesn't it? No. For Mark-Sweep GC, the cost is proportional to (H+R) / (H-R) where H is the total heap size R is the reachable (i.e. live) heap This formula amortises the cost of a collection over the amount of free space recovered. For two-space copying collection, the cost is proportional to R / ((H/2)-R) In both cases, as R approaches H (or H/2), the cost of GC becomes rather large. So in essence, the more live data you have, the more GC will cost. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
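The two formulas can be written out directly; the constants of proportionality are dropped, so the numbers are only meaningful for comparison (an illustration of the asymptotics above, not a model of any particular runtime system):

```haskell
-- Amortised mark-sweep cost per unit of recovered space: (H+R)/(H-R)
markSweepCost :: Double -> Double -> Double
markSweepCost h r = (h + r) / (h - r)

-- Two-space copying cost: R/(H/2 - R); only half the heap is usable
copyingCost :: Double -> Double -> Double
copyingCost h r = r / (h / 2 - r)

main :: IO ()
main = do
  print (markSweepCost 100 50)  -- 3.0
  print (markSweepCost 100 90)  -- 19.0: cost blows up as R approaches H
```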
Re: [Haskell-cafe] Categorized Weaknesses from the State of Haskell 2011 Survey
On 13 Sep 2011, at 18:59, Michael Orlitzky wrote: >> Malcolm Wallace and Colin Runciman's ICFP99 paper functioned well as a >> tutorial for HaXml when I used it - maybe it is a bit out of date now? >> HaXml is hardly a dire case. > > The paper is out-of-date, so it's worse than useless: you'll waste your > time figuring out that it's wrong, and you still won't know how to do > anything. > > There's not one single example anywhere that just shows you how to read > or write a damned XML file. > If there were anything approaching a physical manifestation of HaXml, I > would've strangled it. I am the first to admit that HaXml's documentation is not as good as it could be, and I am sorry that you have had a bad experience. One thing I am puzzled about, is just how extremely difficult it must be, to click on "Detailed documentation of the HaXml APIs" from the HaXml homepage, look for a moment until you see "Text.XML.HaXml.Parse" in the list of modules, click on it, and find, right at the top of the page, a function that parses a String into an XML document tree. It is absolutely true that finding the reverse conversion (XML tree to String) is more obscure, being either the two-stage process of first using "Text.XML.HaXml.Pretty" to convert to a Doc, then "Text.PrettyPrint.HughesPJ" to render to a String; or alternatively the one-shot conversion in "Text.XML.HaXml.Verbatim". Neither module name is as clear as it should be for a beginner, but I can't think of better ones. Plus, it requires some knowledge of the ecosystem, for instance that pretty-printing is a common technique for producing textual output. In fact, my wish as a library author would be: please tell me what you, as a beginner to this library, would like to do with it when you first pick it up? Then perhaps I could write a tutorial that answers the questions people actually ask, and tells them how to get the stuff done that they want to do. 
I have tried writing documentation, but it seems that people do not know how to find or use it. Navigating an API you do not know is hard. I'd like to signpost it better. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] partial inheritance
On 19/07/2011, at 0:09, Patrick Browne wrote: > instance Bird Emperor where > -- No fly method > walk x y = y > > instance Penguin Emperor where > -- How can I override the walk method in the instance Penguin? > -- walk x y = x Why would you want to override the walk method for Emperor? It already has one due to being a Bird. How could you possibly distinguish an Emperor walking as a Bird from an Emperor walking as a Penguin? Why would it be desirable? ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Inconsistent trailing comma in export list and record syntax
> That just shifts the problem, I think? Now you can no longer comment out the > first line. If you are using to-end-of-line comments with --, then the likelihood of noticing a leading ( or { on the line being commented, is much greater than the likelihood of noticing a trailing comma on the end of the line _before_ the one you are commenting. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
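The common workaround in Haskell source is the leading-comma layout, which makes every element except the first uniformly commentable (module and names invented for the example):

```haskell
module Example
  ( foo
  , bar
  -- , baz        -- non-first entries comment out cleanly
  ) where

foo, bar :: Int
foo = 1
bar = 2
```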
Re: [Haskell-cafe] Data.Time
On 2 Jul 2011, at 22:13, Yitzchak Gale wrote: > [1]http://hackage.haskell.org/package/timezone-series > [2]http://hackage.haskell.org/package/timezone-olson I'd just like to add that these timezone packages are fantastic. They are extremely useful if you need accurate conversion between wall-clock times in different locations of the world, at arbitrary dates in the past or future, taking account of the differing moments at which daylight savings times take effect and so on. [At least one financial institution is now using them to avoid losing money that might otherwise happen due to confusion over the exact time of expiry of contracts.] Thanks Yitz. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Patterns for processing large but finite streams
Sure you can.

runningAverage :: Int -> [Double] -> [Double]
runningAverage n xs
  | length chunk < n = []
  | otherwise        = (sum chunk / fromIntegral n)
                       : runningAverage n (tail xs)
  where chunk = take n xs

Lazy lists are absolutely ideal for this purpose.

Regards, Malcolm

On 1 Jul 2011, at 07:33, Eugene Kirpichov wrote:
> Plain old lazy lists do not allow me to combine multiple concurrent
> computations, e.g. I cannot define average from sum and length.
>
> 2011/7/1 Heinrich Apfelmus :
>> Eugene Kirpichov wrote:
>>> I'm rewriting timeplot to avoid holding the whole input in memory, and
>>> naturally a problem arises:
>>>
>>> How to represent large but finite streams and functions that process
>>> them, returning other streams or some kinds of aggregate values?
>>>
>>> Examples:
>>> * Adjacent differences of a stream of numbers
>>> * Given a stream of numbers with times, split it into buckets by time
>>> of given width and produce a stream of (bucket, 50%, 75% and 90%
>>> quantiles in this bucket)
>>> * Sum a stream of numbers
>>>
>>> Is this, perhaps, what comonads are for? Or iteratees?
>>
>> Plain old lazy lists?
>>
>> Best regards,
>> Heinrich Apfelmus
>>
>> --
>> http://apfelmus.nfshost.com
>
> --
> Eugene Kirpichov
> Principal Engineer, Mirantis Inc. http://www.mirantis.com/
> Editor, http://fprog.ru/

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
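Eugene's objection, that sum and length cannot be combined over a lazy list without retaining the whole list, can be met by fusing the two folds into one strict pass, which is the standard trick that iteratee/conduit libraries generalise. A minimal sketch:

```haskell
import Data.List (foldl')

-- One traversal computes both the running sum and the count, so the
-- list can be consumed lazily and garbage-collected as we go.
average :: [Double] -> Double
average xs = s / fromIntegral n
  where
    (s, n) = foldl' step (0, 0 :: Int) xs
    -- force both components so no thunk chain builds up
    step (a, c) x = let a' = a + x
                        c' = c + 1
                    in a' `seq` c' `seq` (a', c')

main :: IO ()
main = print (average [1, 2, 3, 4])  -- 2.5
```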
Re: [Haskell-cafe] Data.Time
On 26 Jun 2011, at 01:53, Tony Morris wrote: > Having only had a flirt with Data.Time previously, I assumed > it would be robust like many other haskell libraries. If, by lack of robustness, you mean that you get runtime errors, then consider them bugs, and file them with the author/maintainer accordingly. If you mean something else, then being more specific might be useful. I know that the first time I looked seriously at Data.Time it seemed rather byzantine and labyrinthine. So many types! So few direct conversions between them! But when you think more closely about the domain, you realise that notions of time are not simple at all, and have varied widely over history, and the complexity of Data.Time only reflects the complexity of the domain. The old-time package is still available, and has a much simplified approach to time (which is evidently wrong in many places), but may better suit the needs of applications that only care to be approximate. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Alex Lexer Performance Issues
On 22 Jun 2011, at 15:53, Tristan Ravitch wrote: > On Wed, Jun 22, 2011 at 07:48:40AM +0100, Stephen Tetley wrote: >> How fast is good old String rather than ByteString? >> >> For lexing, String is a good fit (cheap deconstruction at the head / >> front). For your particular case, maybe it loses due to the large file >> size, maybe it doesn't... > > I gave it a shot and the percentages in the profile are approximately > the same (and peak memory usage was about double). I might end up > having to parse the original binary format instead of the text format. There is an old folklore that lexing is usually the most expensive phase of any compiler-like traversal. 50% of time and space expended on lexing was pretty common twenty years ago. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Why aren't there anonymous sum types in Haskell?
On 21 Jun 2011, at 20:53, Elliot Stern wrote: > A tuple is basically an anonymous product type. It's convenient to not have > to spend the time making a named product type, because product types are so > obviously useful. > > Is there any reason why Haskell doesn't have anonymous sum types? If there > isn't some theoretical problem, is there any practical reason why they > haven't been implemented? The Either type is the nearest Haskell comes to having anonymous sum types. If you are bothered because Either has a name and constructors, it does not take long before you realise that (,) has a name and a constructor too. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Haskell *interpreter* on iPad? (Scheme and Ocaml are there)
On 18 Jun 2011, at 20:19, Jack Henahan wrote: > but the dev would either be forced into Hugs, or they'd have to implement a > more portable GHC. Does such a thing exist already? Just as a point of interest, the original nhc compiler was originally written for an ARM architecture machine (Acorn Archimedes) with 2Mb of memory. Its successor, nhc98, can be bootstrapped from C sources, and always aimed to be as portable as possible. The project is largely unmaintained now, so there is likely to be some bitrot, but it would probably work OK with some effort. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] location of IEEE Viz code ?
> http://hackage.haskell.org/trac/PolyFunViz/wiki/IEEEVisCode > > talks about the code being available through darcs but I can't seem to put my > hands on the http address I would need to pull the code. > > This is all relating to the paper, "Huge Data but Small Programs: > Visualization Design via Multiple Embedded DSLs". http://www.cs.york.ac.uk/fp/darcs/polyfunviz/ I can't recall the exact state of the repository; it is likely that some of it may no longer build with newer versions of ghc and/or OpenGL. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] SIGPLAN Programming Languages Software Award
> Curious observation:
>
>     Object language         Type language
>     OO (C++)                functional
>     functional (Haskell)    logical
>
> It makes me wonder what comes next...

To be more accurate, it was Functional Dependencies that introduced a logic programming language to the type level in Haskell. Type Families are an explicit attempt to use instead a functional language at the type level to mirror the functional language at the value level.

Regards,
    Malcolm

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Type Constraints on Data Constructors
> data Bar f a = Foo f => Bar {bar :: f a} The class context on the data constructor buys you nothing extra in terms of expressivity in the language. All it does is force you to repeat the context on every function that uses the datatype. For this reason, the language committee has decided that the feature will be removed in the next revision of Haskell. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] SIGPLAN Programming Languages Software Award
> More seriously, the influence of Haskell over F# (and even Python) is > undoubted, but do you really think Haskell influenced Java Generics? (IMHO > they were more inspired from C++ templates) > (That is a question, not an assertion). Phil Wadler had a hand in designing both Haskell and Java Generics I believe. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] How to install GhC on a Mac without registering?
On 6 Jun 2011, at 13:49, Lyndon Maydwell wrote: > It would be fantastic if XCode wasn't a dependency. ... > > Not to detract at all from the work of the wonderful GHC and Haskell > Platform contributors in any way. For me it would just make it that > much easier to convince mac-using friends to give Haskell a try. The ghc team already bundle a copy of gcc in their Windows distribution, precisely because it can be fiddly to get a working copy of gcc for that platform otherwise. I wonder if they would consider the possibility of shipping gcc on Mac too? (There may be good reasons not to do that, but let's have the discussion.) Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] How to install GhC on a Mac without registering?
> it won't be a pleasant choice to fork over a good chunk of money to > Apple for the use of free software that they didn't develop. Whilst I acknowledge your painful situation, I'd like to rebut the idea that Apple stole someone else's free software and are selling it on. In fact, Apple developed, or paid for development of, quite a chunk of gcc: the objective-C front end and LLVM back end at least. In paying for XCode 4, you are getting a lot of proprietary code in addition to gcc. However, XCode 3 remains free to download, if you are a registered Apple developer. Registration is completely free of charge: http://developer.apple.com/programs/register/ You may find other links that make registration appear to cost $99 - but those are for the "iOS" or "Mac" developer programs, not the "Apple" developer program. The ones that charge money enable the right to publish software in the App Stores, which you do not need. I think you can download the free version of the XCode 3 installer, burn it to a DVD, and pass the DVD round your students. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Fwd: Abnormal behaviors when Using ghci
On 5/06/2011, at 13:12, 吴兴博 wrote: > 1) I'm using Haskell platform 2011.2 on windows (7). Every several > days, ghci will crash with no messages. even when I'm just typing with > text buffer, without an 'enter'. I got nothing after the crash, not > even an exception code, don't even mention the core-dump. Do you have a "single sign-on" application installed (possibly TAM ESSO)? Weird though it sounds, we have experience of this Windows app randomly killing other processes, such that they just disappear with no apparent cause. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Matplotlib analog for Haskell?
> I tried gnuplot:
>
> Demo.hs:25:18:
>     Could not find module `Paths_gnuplot':
>       Use -v to see a list of the files searched for.
> Failed, modules loaded: none.
> Prelude Graphics.Gnuplot.Simple>
>
> Where to get the `Paths_gnuplot' module?

    $ cd gnuplot-0.4.2
    $ cabal install     # this generates the Paths_gnuplot module
    $ ghci

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Comment Syntax
>> -- followed by a symbol does not start a comment, thus for example, haddock >> declarations must begin with -- |, and not --|. >> >> What might --| mean, if not a comment? It doesn't seem possible to define it >> as an operator. > > GHCi, at least, allows it. > > Prelude> let (--|) = (+) > Prelude> 1 --| 2 > 3 I believe the motivating example that persuaded the Language Committee to allow these symbols was --> which is not of course used anywhere in the standard libraries, but is an extremely nice symbol to have available in user code. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] What's the advantage of writing Haskell this way?
> instance (Monad m, MonadPlus m) => Monoid (Stream m a) where
>     mempty = Chunks mempty
>     mappend (Chunks xs) (Chunks ys) = Chunks (xs `mappend` ys)
>     mappend _ _ = EOF
>
> Iteratee.hs:28:25:
>     No instance for (Monoid (m a))
>       arising from a use of `mempty'

There is a clue in the first part of the error message. Add the required instance as part of the predicate:

    instance (Monad m, MonadPlus m, Monoid (m a)) => Monoid (Stream m a) where
        ...

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
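The suggested fix can be checked with a small self-contained sketch. The Stream type here is reconstructed from the quoted snippet; the Monad/MonadPlus constraints are omitted since only `Monoid (m a)` is needed for this check, and on modern GHC (8.4+) the combining operation must also be supplied through a Semigroup instance:

```haskell
{-# LANGUAGE FlexibleContexts #-}

-- Stream type reconstructed from the thread.
data Stream m a = Chunks (m a) | EOF

-- The `Monoid (m a)` constraint is exactly what the error message asks for.
instance Monoid (m a) => Semigroup (Stream m a) where
    Chunks xs <> Chunks ys = Chunks (xs <> ys)
    _         <> _         = EOF

instance Monoid (m a) => Monoid (Stream m a) where
    mempty = Chunks mempty
```

With this in place, e.g. `Chunks [1,2] <> Chunks [3]` combines the underlying chunks, and anything combined with EOF stays EOF.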
Re: [Haskell-cafe] [Maybe Int] sans Nothings
On 23 May 2011, at 17:20, michael rice wrote:
> What's the best way to end up with a list composed of only the Just values,
> no Nothings?

Alternatively,

    [ x | Just x <- originals ]

It also occurs to me that perhaps you still want the Just constructors.

    [ Just x | Just x <- originals ]
    [ x | x@(Just _) <- originals ]

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] [Maybe Int] sans Nothings
On 23 May 2011, at 17:20, michael rice wrote:
> What's the best way to end up with a list composed of only the Just values,
> no Nothings?

Go to haskell.org/hoogle
Type in "[Maybe a] -> [a]"
Click on the first result.

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
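The function that Hoogle query turns up is catMaybes, from Data.Maybe. For example:

```haskell
import Data.Maybe (catMaybes)

-- catMaybes keeps the payloads of the Just values and drops the Nothings.
main :: IO ()
main = print (catMaybes [Just 1, Nothing, Just 3, Nothing :: Maybe Int])
-- prints [1,3]
```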
Re: [Haskell-cafe] Using cmake with haskell
> On 5/14/11 6:12 PM, Nathan Howell wrote: >> Waf supports parallel builds and works with GHC without too much trouble. I'm surprised no-one has yet mentioned Shake, a build tool/library written in Haskell. It does parallel builds, multi-language working, accurate dependencies, etc etc. I use it every day at work, and it is robust, scalable, and relatively easy to use. Introductory video here: http://vimeo.com/15465133 Open source implementation here: https://github.com/batterseapower/openshake Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] cannot install base-4.3.1.0 package
On 15 May 2011, at 15:35, Immanuel Normann wrote: > Why is it so complicated to install the base package? You cannot upgrade the base package that comes with ghc. It's a bad design, but there we go. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] no time profiling on my MacBookPro8,1
On 6 May 2011, at 23:07, Nicolas Frisby wrote: > all of the %time cells in the generated Main.prof file are 0.0, as is > the total time count (0.00 secs and 0 ticks). The %alloc cells seem > normal. See http://hackage.haskell.org/trac/ghc/ticket/5137 Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] How often is the planet updated?
On 28 Apr 2011, at 11:26, Magnus Therning wrote: > I see that Planet Haskell hasn't been updated since April 26. Is > something wrong with it, or does it really not update more often than > that? Just to note: there was a configuration problem with planet, which has now been sorted out. The usual schedule of updates has resumed. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Python is lazier than Haskell
On 29 Apr 2011, at 05:38, Ben Lippmeier wrote: > Laziness at the value level causes space leaks, This is well-worn folklore, but a bit misleading. Most of my recent space leaks have been caused by excessive strictness. Space leaks occur in all kinds of programs and languages, and I am not convinced there is a strong correlation between laziness and leakiness. If anything, I think there is observation bias: lazy programmers have good tools for identifying, finding, and removing leaks. Others do not. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] converting prefixes of CString <-> String
On 26 Apr 2011, at 13:31, Eric Stansifer wrote: >>> Let 'c2h' convert CStrings to Haskell Strings, and 'h2c' convert >>> Haskell Strings to CStrings. (If I understand correctly, c2h . h2c >>> === id, but h2c . c2h is not the identity on all inputs; >> >> That is correct. CStrings are 8-bits, and Haskell Strings are 32-bits. >> Converting from Haskell to C loses information, unless you use a multi-byte >> encoding on the C side (for instance, UTF8). > > So actually I am incorrect, and h2c . c2h is the identity but c2h . h2c is > not? Ah, my bad. In reading the composition from right to left, I inadvertently read h2c and c2h from right to left as well! So, starting from C, converting to Haskell, and back to C is the identity, yes. Starting from Haskell, no. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] A small Darcs anomoly
On 25 Apr 2011, at 11:13, Andrew Coppin wrote: > On 24/04/2011 06:33 PM, Jason Dagit wrote: >> >> This is because of a deliberate choice that was made by David Roundy. >> In darcs, you never have multiple branches within a single darcs >> repository directory tree. > > Yes, this seems clear. I'm just wondering whether or not it's the best design > choice. It seems to me to be a considerable insight. Branches and repositories are the same thing. There is no need for two separate concepts. The main reason other VCSes have two concepts is because one of them is often more efficiently implemented (internally) than the other. But that's silly - how much better to abstract over the mental clutter, and let the implementation decide how its internals look! So in darcs, two repositories on the same machine share the same files (like a branch), but if they are on different machines, they have separate copies of the files. The difference is a detail that you really don't need to know or care about. > It does mean that you duplicate information. You have [nearly] the same set > of patches stored twice, No, if on the same machine, the patches only appear once, it is just the index that duplicates some information (I think). In fact just as if it were a branch in another VCS. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] converting prefixes of CString <-> String
On 25 Apr 2011, at 08:16, Eric Stansifer wrote: > Let 'c2h' convert CStrings to Haskell Strings, and 'h2c' convert > Haskell Strings to CStrings. (If I understand correctly, c2h . h2c > === id, but h2c . c2h is not the identity on all inputs; That is correct. CStrings are 8-bits, and Haskell Strings are 32-bits. Converting from Haskell to C loses information, unless you use a multi-byte encoding on the C side (for instance, UTF8). > or perhaps c2h is not defined for all CStrings. Rather, h2c is not necessarily well-defined for all Haskell Strings. In particular, the marshalling functions in Foreign.C.String simply truncate any character larger than one byte, to its lowest byte. I suggest you look at the utf8-string package, for instance Codec.Binary.UTF8.String.{encode,decode}, which convert Haskell strings to/from a list of Word8, which can then be transferred via the FFI to wherever you like. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
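The asymmetry described in this exchange can be illustrated without the FFI at all. In the sketch below, h2c and c2h are my own byte-level models of the marshalling direction (not the real Foreign.C.String functions): each Char is truncated to its lowest byte, as described above, so the C-to-Haskell-to-C round trip is the identity but the Haskell-to-C-to-Haskell one is not:

```haskell
import Data.Char (chr, ord)
import Data.Word (Word8)

-- Model a CString as a list of bytes.
h2c :: String -> [Word8]
h2c = map (fromIntegral . ord)   -- truncates each Char to its low byte

c2h :: [Word8] -> String
c2h = map (chr . fromIntegral)

main :: IO ()
main = do
    print (c2h (h2c "hello"))    -- ASCII survives the round trip: "hello"
    print (c2h (h2c "\x4142"))   -- a wide Char loses its high byte: "B"
```

Going the other way, `h2c . c2h` is the identity on every byte list, since every byte maps to a distinct Char below 256.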
Re: [Haskell-cafe] ANN: unordered-containers - a new, faster hashing-based containers library
On 22 Feb 2011, at 22:21, Bryan O'Sullivan wrote: for some code that's (b) faster than anything else currently available I look forward to seeing some benchmarks against libraries other than containers, such as AVL trees, bytestring-trie, hamtmap, list-trie, etc. Good comparisons of different API-usage patterns are hard to come by. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] A simple attoparsec question
On 1 Mar 2011, at 21:58, Evan Laforge wrote:
> parseConstant = Reference <$> try parseLocLabel
>     <|> PlainNum <$> decimal
>     <|> char '#' *> fmap PlainNum hexadecimal
>     <|> char '\'' *> (CharLit <$> notChar '\n') <* char '\''
>     <|> try (char '"' *> (StringLit . B.pack <$> manyTill (notChar '\n') (char '"')))
>     <?> "constant"
>
> The problem is that attoparsec just silently fails on this kind of string
> and tries other parsers afterwards, which leads to strange results. Is
> there a way to force the whole parser to fail, even if there's an
> alternative parser afterwards?

I _think_ what the original poster is worried about is that, having consumed an initial portion of a constant, e.g. the leading # or ' or ", if the input does not complete the token sequence in a valid way, then the other alternatives are tried anyway (and hopelessly). This can lead to very poor error messages.

The technique advocated by the polyparse library is to explicitly annotate the knowledge that, when a certain sequence has been seen already, no other alternative can possibly match. The combinator is called 'commit'. This locates the errors much more precisely. For instance (in some hybrid of polyparse/attoparsec combinators):

    parseConstant = Reference <$> try parseLocLabel
        <|> PlainNum <$> decimal
        <|> char '#' *> commit (fmap PlainNum hexadecimal)
        <|> char '\'' *> commit ((CharLit <$> notChar '\n') <* char '\'')
        <|> char '"' *> commit (StringLit . B.pack <$> manyTill (notChar '\n') (char '"'))
        <?> "constant"

Regards,
    Malcolm

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Where to put a library
On 2 Mar 2011, at 22:38, Sebastian Fischer wrote:
> You could place the parsers under
>     Text.TSPLIB
>     Text.SATLIB

Some other suggestions might be

    Codec.TSP
    Codec.SAT

or

    FileFormat.TSP
    FileFormat.SAT

Regards,
    Malcolm

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] [Cabal-devel] Cabal && license combinations
On 10 Feb 2011, at 17:38, Antoine Latter wrote: So no, the instant of compilation is not when the transitive dependencies kick in, it is the publication of compiled binaries, which in my mind is a pretty specialized case. This is possibly the most important point to emphasise, of which many people seem unaware. If you expect to receive an open source application from someone else as source code, and build it yourself, then almost by the definition of Open Source, you are already in compliance with all licences. It is only those, comparatively few, people who build binaries and give them to other people *as binaries* who even need to think about licensing issues. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Cabal && license combinations
It seems then that a package should be the least restrictive combination of all the licenses in all the contained modules. Omit the words "least restrictive" and I think you are correct. To combine licences, just aggregate them. There is no lattice of subsumption; no "more" or "less" restrictive ordering. It's simple: you must obey all of them. Some aggregations introduce a contradiction of terms, so you cannot legally aggregate those modules without breaking some term. But if the terms of the aggregated licences are compatible rather than contradictory, then all is good. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] coding style vs. foreign interfaces
On 7 Feb 2011, at 03:10, Donn Cave wrote: I just noticed a handful of spelling errors, in a package that isn't all that obscure. Enums from a C interface - data AlarmingNews = -- ALARM_STUFF_WENT_WRONG AlarmStufWentWrong | ... FWIW, if you generate these bindings with a tool (e.g. hsc2hs, c2hs), then there are standard machine-generated translations, which follow simple rules (so are easy to guess) and being mechanical, are always spelled as expected. For example aLARM_STUFF_WENT_WRONG for a constant Haskell Int value corresponding to the C enum or CPP constant. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Delivery to haskell-platf...@projects.haskell.org
but I assumed that had already been resolved and that I was seeing another failure, but apparently not :-( Hopefully it'll be resolved soon. If anyone with sysadmin experience on Debian can suggest why "telnet localhost 25" hangs on community.haskell.org, even though both exim and clamd are running, we would be grateful for some insight. We just don't know what is wrong, so fixing it is not likely to be easy. Ideas to haskell-infrastruct...@community.galois.com please. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Haskell time line?
On 20 Jan 2011, at 14:40, michael rice wrote: Maybe a better question would be which of these features, *weren't* present at first launch? The only obvious feature that was missing in 1990 was monads (at least built-in support for them). Do-notation for instance was first introduced by Mark Jones in Gofer (which later became Hugs), in version 2.30, in June 1994. Many of the specific monads (e.g. Software Transactional Memory) necessarily came later as well, for obvious reasons. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Misleading MVar documentation
On 16 Jan 2011, at 03:58, Jan-Willem Maessen wrote:
> Actually, the first presentation of M-structures is rather older than that.
> See Barth, Nikhil, and Arvind's FPCA '91 paper:
> http://portal.acm.org/citation.cfm?id=652538
> The original formulation was indeed in terms of "take" and "put", though
> unconditional read and write primitives were pretty commonly used in Id
> programs. The take/put view can also usefully be thought of as a 1-element
> blocking channel.

The full spectrum of one-element communication protocols is set out in

    H R Simpson. The MASCOT method.
    Software Engineering Journal, 1(3):103–120, March 1986.

The notation was known as Real Time Networks. In this scheme, there are four types of protocol, each useful in different circumstances:

    blocking read,     blocking write:      a channel
    non-blocking read, blocking write:      a constant
    blocking read,     non-blocking write:  a signal
    non-blocking read, non-blocking write:  a pool

The first of these, channel, corresponds to the MVar. The pool corresponds to an IORef.

Regards,
    Malcolm

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
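The "1-element blocking channel" view of MVar can be seen in a tiny sketch: putMVar blocks while the cell is full and takeMVar blocks while it is empty, so a producer and a consumer proceed in lock-step and no value is ever overwritten or lost.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar

main :: IO ()
main = do
    cell <- newEmptyMVar
    -- Producer: each putMVar blocks until the previous value is taken.
    _ <- forkIO (mapM_ (putMVar cell) [1 :: Int, 2, 3])
    -- Consumer: each takeMVar blocks until a value is available.
    xs <- sequence (replicate 3 (takeMVar cell))
    print xs   -- [1,2,3]
```

Replacing the MVar with an IORef (unconditional read and write) gives the "pool" protocol instead: reads and writes never block, and intermediate values can be skipped or read twice.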
Re: [Haskell-cafe] Odd profiling results
http://www.mega-nerd.com/tmp/ddc-heap-usage-20101231.png We have no particular problem with the 11 peaks (one for each source file) but wonder what the hell is going on in the periods when the memory usage is flat. The peaks I am guessing are largely attributable to parsing the source files. Then, once the source has been converted to an AST, the DDC compiler is presumably doing some analysis before moving on to the next file? I think these are the well-behaved flat bits. These phases do not allocate anything fresh, so they are not creating new data-structures, but perhaps are propagating static information around the AST. Something like a type/effect analysis maybe? Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] GHC ParseTree Module
You will be more likely to get an answer on the ghc-users mailing list (cc'ed). The ghc developers rarely follow -cafe. On 1 Jan 2011, at 20:36, Jane Ren wrote: Hi, Does anyone know what GHC module gets the AST and type info of some source code? This is the GHC module that converts all of Haskell into an AST with a small number of pattern cases, where each AST node is annotated with the Haskell type. Thanks ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Haskell Parse Tree
The haskell-src-exts package? http://hackage.haskell.org/package/haskell-src-exts On 21 Dec 2010, at 09:35, Serguey Zefirov wrote: 2010/12/21 Jane Ren : Does anyone know how to get the parse tree of a piece of Haskell code? Any recommended documentation? ghc as a library? http://www.haskell.org/haskellwiki/GHC/As_a_library ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Fwd: typehash patch for base >= 4 && < 4.4
I sent a patch to fix this to the maintainer for the typehash package (Lennart Augustsson) on the 16th of November, but haven't heard anything back - it is possible that he doesn't read the e-mail address I sent it to, or is no longer interested in maintaining typehash. Lennart applied your patch and uploaded a new version a couple of days ago. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: Re: Reply-To: Header in Mailinglists
> If the mailing list replaced the Reply-To header, it would require
> additional effort from responders instead of just pressing reply-to-all.

If the list were to add a "Reply-To:" header, but only in the case where one was not already present, that would seem to me to be ideal. (None of the internet polemics against Reply-To that I have seen have considered this modest suggestion.)

In the past, I have carefully used the Reply-To header to direct responses to a particular mailing list of many (e.g. when cross-posting an announcement). Yet because there is a culture of "Reply-To: is bad", and most MUAs do not have a "ReplyToList" option, most respondents end up pushing "Reply to all", which ignores my setting of "Reply-To:", and spams more people than necessary.

Regards,
    Malcolm

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Type Directed Name Resolution
On 12 Nov 2010, at 20:21, Andrew Coppin wrote:
> On 11/11/2010 08:43 PM, Richard O'Keefe wrote:
>> If length, map, and so on had always been part of a Sequence typeclass,
>> people would not now be talking about
>
> It's always puzzled me that Haskell's standard containers almost completely
> lack any way to use them polymorphically.

On the contrary, there is the Edison package of containers and algorithms, since at least the late 90's, which has type classes for all of the common operations. It is high quality, and kind-of the "ideal standard" in an academic sort of way, except that almost nobody uses it. In particular, ghc did not use it internally, choosing Data.Map instead, and the legendary suspicion of programmers who refuse to use an alternative library replacing one that already comes with their compiler means that nobody else did either. Either that, or people find it awkward to deal with the substantial extra hierarchies of type classes. Edison-API and Edison-core are available on hackage, by the way.

Regards,
    Malcolm

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Type Directed Name Resolution
The point is that refusing something you can have now (though of course it's an open question whether TDNR is something we can "have now") out of fear that it'll prevent you getting something better later is speculative and often backfires. I think we are very far from having TDNR "now". It is really quite complicated to interleave name resolution with type checking in any compiler. So far, we have a design, that's all, no implementation. We also have (several) designs for proper record systems. If the outcome of this discussion is a clamour for better records instead of TDNR, then that would certainly make me happy. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Serialization of (a -> b) and IO a
I'll just note that LLVM is only platform independent to a degree. Or rather, I believe the situation is that it *is* architecture independent, but it doesn't abstract anything else besides the architecture In particular, imagine how you might serialise a Haskell function which is an FFI binding to some external platform-specific library (e.g. Posix, Win32, Gtk+, WPF), such that you could save it on a Windows machine, copy to Linux or Mac, and start it running again. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: Haskell on ancient machines (Was: "Haskell is a scripting language inspired by Python.")
>> I haven't checked how much RAM nhc98 needs for bootstrapping recently, but
>> the Makefile suggests 16Mb of heap + 2Mb of stack is more than sufficient -
>> it could probably manage with less.
>
> If it was possible to save resources in those days, why isn't it possible
> today? Too many Haskell extensions? Too much auxiliary code for interfacing
> with the OS? Increased safety?

Simply that more people care about speed than about space.

Regards,
    Malcolm

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Re: "Haskell is a scripting language inspired by Python."
On 5/11/2010, at 8:54 AM, Andrew Coppin wrote:
> Can you actually run something like Haskell with mere kilobytes of RAM?

I recall running Haskell-like programs (compiled by Gofer, the predecessor of Hugs) on a machine with 256Kb of memory, back in the early 1990s. They were smallish programs of course. The interpreter/RTS was about 50Kb, the bytecode for the program took up a few Kb, and there was about 100Kb of stack and heap combined, so I was not even using the full capacity of the machine.

Regards,
    Malcolm

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Re: "Haskell is a scripting language inspired by Python."
On 4 Nov 2010, at 22:38, Lennart Augustsson wrote: The smallest bootstrapped Haskell compiler is NHC which (I think) runs in a few MB. Originally, it needed to be able to compile itself in the 2Mb available on Niklas's Amiga. Then he got an upgrade to 4Mb, so he started to become less disciplined about keeping things small. :-) I haven't checked how much RAM nhc98 needs for bootstrapping recently, but the Makefile suggests 16Mb of heap + 2Mb of stack is more than sufficient - it could probably manage with less. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] What is simplest extension language to implement?
On 4 Nov 2010, at 17:52, Luke Palmer wrote:

> On Thu, Nov 4, 2010 at 5:30 AM, Malcolm Wallace wrote:
>>> ehm. I missed something and ghc api is well documented and stable ?
>>
>> There are other ways of adding Haskell as a scripting language - bundling ghc is not necessary.
>
> Do tell.

Well, our solution is not entirely off-the-shelf, and possibly not to everyone's taste or ability, but we wrote our own Haskell "compiler", and a bunch of auto-generation tools (and FFI magic) that expose the underlying application's APIs (written in both Haskell and C++) as import-able modules into the scripting-Haskell layer.

When I say "we", of course I mean Lennart, who may have some previous experience in writing Haskell compilers... But this one is based on many freely available packages like haskell-src-exts and uniplate, so lots of the hard work had already been done for us. And who knows, perhaps one day enough of the other parts of a basic compiler (name resolver, type checker, translator to core) might appear in Hackage to make it easy for anyone to write their own scripting engine.

Regards,
    Malcolm
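The architecture described above (host-application APIs exposed to a small scripting layer, with scripts translated to a simple core and evaluated) can be caricatured in a few lines. This is a toy sketch for illustration only - the `Expr` type, `evalExpr`, and the host-function environment are all invented here, and bear no relation to the actual product code:

```haskell
-- A toy "core" language for an embedded scripting layer.  The host
-- application exposes named functions and variables; scripts are
-- expressions over them.  All names below are hypothetical.
data Expr = Lit Double
          | Var String          -- a host-exposed value
          | App String [Expr]   -- a call into a host-exposed function
  deriving Show

-- Evaluate an expression against environments of host values and
-- host functions; Nothing models a name-resolution failure.
evalExpr :: [(String, Double)]
         -> [(String, [Double] -> Double)]
         -> Expr -> Maybe Double
evalExpr _env _fns (Lit d)      = Just d
evalExpr  env _fns (Var v)      = lookup v env
evalExpr  env  fns (App f args) = do
  g  <- lookup f fns                   -- resolve the host function
  xs <- mapM (evalExpr env fns) args   -- evaluate arguments first
  pure (g xs)
```

In a real scripting engine, the name resolver and type checker mentioned in the message would run before evaluation, so that `Nothing` cases are reported to the user as script errors rather than silent failures.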
Re: [Haskell-cafe] What is simplest extension language to implement?
>>> ehm. I missed something and ghc api is well documented and stable ?
>>
>> There are other ways of adding Haskell as a scripting language - bundling ghc is not necessary.
>
> It is inacceptable for scripting language, faced to no-programmers. Such languages must be as plain and regular, as possible.

We give Haskell as an embedded scripting language to non-programmers, and they love it. They especially like the strong typing, which finds their bugs before they ever get the chance to run their script. The terseness and lack of similarity to other programming languages is another benefit.

Regards,
    Malcolm
Re: [Haskell-cafe] "Haskell is a scripting language inspired by Python."
> Did Haskell get significant whitespace from Python - doubtful as Python possibly wasn't visible enough at the time, but you never know.

Whitespace is significant in almost every language: foo bar /= foobar. Using indentation for program structuring was introduced by Peter Landin in his ISWIM language (1966), which is where Haskell picked it up from (via Miranda). See http://en.wikipedia.org/wiki/Off-side_rule

Regards,
    Malcolm
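Haskell's off-side rule is pure sugar: the layout algorithm inserts the braces and semicolons that the programmer leaves out. The two definitions below (names invented for illustration) are equivalent, one written with layout and one with the explicit delimiters the layout rule would insert:

```haskell
-- Layout version: indentation delimits the case alternatives.
classify :: Int -> String
classify x = case x of
               0 -> "zero"
               _ -> "nonzero"

-- Explicit version: the braces and semicolons that layout inserts.
classify' :: Int -> String
classify' x = case x of { 0 -> "zero" ; _ -> "nonzero" }
```

Both forms are accepted by any Haskell compiler, and a program may mix them freely.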
Re: [Haskell-cafe] Parsing workflow
> - Is this a valid approach?

It is possible that your Parsec lexer will need to see the entire input before it delivers any tokens at all to the Happy parser. This might cause a space problem, depending on how large your inputs are likely to be.

> - What is your workflow on parsing complex data structures?

I usually write a lexer by hand, as a little state machine delivering a lazy list of tokens, then pass the tokens as the input to a grammar built from parser combinators. For larger inputs, I use lazy parser combinators to avoid space leaks.

> - What about performance? Since my project is going to be an interpreted language parsing performance might be interesting aswell. I've read that happy is in general faster than parsec, but what if I combine both of them as I said above? I guess that parsing a simple list of tokens without any nested parser structures would be pretty fast?

Parser combinators are rarely a performance bottleneck in my experience. However, relying on parser combinators to do lexing often slows things down too much (too much back-tracking).

Regards,
    Malcolm
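A hand-written lexer of the kind described above - a little state machine delivering a lazy list of tokens - might look like this minimal sketch (the token type and names are invented for illustration):

```haskell
import Data.Char (isAlpha, isDigit, isSpace)

data Token = TIdent String | TNum Integer | TSym Char
  deriving (Eq, Show)

-- Because the result list is built lazily, a downstream parser can
-- consume tokens as they are produced, without the lexer having to
-- see the whole input first.
lexer :: String -> [Token]
lexer [] = []
lexer (c:cs)
  | isSpace c = lexer cs                          -- skip whitespace
  | isAlpha c = let (w, rest) = span isAlpha (c:cs)
                in TIdent w : lexer rest          -- identifier
  | isDigit c = let (ds, rest) = span isDigit (c:cs)
                in TNum (read ds) : lexer rest    -- number literal
  | otherwise = TSym c : lexer cs                 -- single symbol
```

Each guard is one "state" of the machine; a real lexer would add cases for string literals, comments, and multi-character operators, but the shape stays the same.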
Re: [Haskell-cafe] Parsing workflow
On 31 Oct 2010, at 16:15, Nils Schweinsberg wrote:

> On 31.10.2010 16:53, Vo Minh Thu wrote:
>> So you have to either factorize you parsers or use the 'try'.
>
> This is exactly what gives me headaches. It's hard to tell where you need try/lookAhead and where you don't need them. And I don't really feel comfortable wrapping everything into try blocks...

Have you considered using a different set of parser combinators, a set that is actually composable, and does not require the mysterious "try"? I would recommend polyparse (because I wrote it), but uuparse would also be a fine choice.

Regards,
    Malcolm
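To see why some combinator libraries can do without `try`: in the classic list-of-successes representation, `<|>` simply collects the results of both alternatives, so a failed branch never "commits" input the way Parsec's default choice does. The sketch below is the textbook construction, not polyparse's or uuparse's actual API:

```haskell
import Control.Applicative (Alternative (..))

-- List-of-successes: a parser returns every (result, leftover) pair.
newtype Parser a = Parser { runParser :: String -> [(a, String)] }

instance Functor Parser where
  fmap f (Parser p) = Parser (\s -> [ (f a, r) | (a, r) <- p s ])

instance Applicative Parser where
  pure a = Parser (\s -> [(a, s)])
  Parser pf <*> Parser pa =
    Parser (\s -> [ (f a, r') | (f, r) <- pf s, (a, r') <- pa r ])

instance Alternative Parser where
  empty = Parser (const [])
  -- Full backtracking: both alternatives see the same input.
  Parser p <|> Parser q = Parser (\s -> p s ++ q s)

char :: Char -> Parser Char
char c = Parser (\s -> case s of
                         (x:xs) | x == c -> [(c, xs)]
                         _               -> [])

string :: String -> Parser String
string = traverse char
```

With this representation, `string "for" <|> string "foo"` succeeds on input `"foo!"` even though the first branch consumes two characters before failing - exactly the case that needs `try` in Parsec. The price is that naive full backtracking can retain input and cost time, which is why production libraries refine this scheme.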
Re: [Haskell-cafe] New repo location for the network package
> if I could move to darcs and preserve history I would.

Search for "git fast-export" and "darcs-fastconvert".

Regards,
    Malcolm