Re: [Haskell-cafe] generate Haskell code from model
On 4/14/07, Steffen Mazanek [EMAIL PROTECTED] wrote: Brian, but don't you think that you have to write a lot of boilerplate code in Haskell?

I have never felt I was writing a lot of boilerplate. There are a lot of abstraction mechanisms in Haskell to avoid boilerplate.

Second, if Haskell should be more successful in the real world there has to be a way of demonstrating basic ideas of a big program to customers. How would you do this? Everybody knows UML class diagrams, for example. In contrast, nobody knows about termgraphs or lambda *g*.

I've never had to show a UML or ER diagram to any business people--usually they want a slideshow that is far simpler and a little prettier. The fact that nobody knows about termgraphs or lambda in your group means that you probably shouldn't be considering Haskell (for the same reason my bosses always asked me to document everything--in case you get hit by a bus).

Thank you very much for contributing to the discussion. Please assume that you have to generate the code from a model. Further assume that you have no choice and are not allowed to discuss the sense of this approach :-) What should the code look like?

I am not sure whether you are trying to solve a real problem or not. If you are solving a real problem, where you already happen to have an EMF model which you are required to generate code from, then I recommend just doing everything in Java using the existing tools built for EMF. If you decide to keep working in Haskell anyway, and it works out well, please share your solution, because I think many people here will be very interested. wxHaskell, OOHaskell, and O'Haskell are all starting points for this type of project.

- Brian

___ Haskell-Cafe mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] generate Haskell code from model
On 4/13/07, Steffen Mazanek [EMAIL PROTECTED] wrote: Hello everybody, I would like to start a discussion on how to generate best-practice Haskell code from a model, e.g. from EMF.

I started learning Haskell precisely to solve problems like this. But, once I got into it, I realized that Haskell is a much better modeling language than the modeling language I was using (MOF/UML, the predecessors to EMF). Furthermore, all the infrastructure built on top of that modeling language was very easy to replace with Haskell code. As a result, I gave up that effort.

You said "The benefits of the model+generate approach are well known"; however, I disagree. W3C DOM, MOF, UML, CORBA, and NetBeans 3.x-4.x are all obvious examples of the failure of the model+generate approach. If the modeling language is sufficiently powerful, then it should be feasible to execute the models directly using a (custom-built) interpreter. If the modeling language is weak, then it is better to just do the modeling in Haskell or another more powerful language.

The MDA idea was that you would have one model and then be able to use that model in a variety of different programming languages, without having to rewrite code in each target language. Now, people are getting this benefit via a "code, then translate" approach. For example, GWT allows the developer to write Java code, then generate the equivalent Javascript, without any hand-wavy models in between. JRuby lets one write code in Ruby to be used by code in Java; IronPython does the same for other .NET languages. In fact, C# is basically the .NET counterpart to EMF.

FWIW, I also think that data-based languages like ERD, Relax NG, and XQuery/XPath/XML Schema are a much closer fit to Haskell than EMF. EMF is designed to be translated to any object-oriented, class-based, (solely) subtype-polymorphic, single-dispatched, single-inheritance language; i.e., Java. In fact, EMF is really a Java-optimized subset of what was supposed to become part of MOF 2.0.
- Brian
Re: [Haskell-cafe] Haskell vs Ruby as a scripting language
On 2/10/07, Joel Reymont [EMAIL PROTECTED] wrote: Is anyone using Haskell as a scripting language in their app? I'm thinking of how viable it would be to embed ghc in a Mac (Cocoa) app.

Is your application primarily written in Haskell? If not, you would have to create an interface between that language and Haskell in order for your Haskell programs to manipulate your domain objects and user interface. I think people would be happy if you did this, because then there would be a Haskell API for Cocoa, but it seems like a lot of work. My guess is that it would be easier to do such bindings for Javascript due to its dynamic nature and it being an object-oriented language. Also, several projects have embedded Javascript successfully, so you would have many examples to base your project on.

Visual Haskell also embeds GHC (into Visual Studio). However, Visual Haskell makes my Visual Studio unstable and often unresponsive. Because of this, I decided to use a multi-process design when embedding GHC into an editor--I built a customized version of the stock GHC executable that accepts commands from its parent process instead of through the command line. This way, any crashing or similar problems in the modified compiler will not result in the user losing any work in the editor. Also, I don't have to worry about threading problems, I can more clearly monitor the memory usage of the compiler/code-analyzer distinctly from that of the editor, and it should also be easier to swap in another Haskell implementation (e.g. jhc, yhc) if I wanted to. But there were disadvantages too (e.g. I had to implement my own lexer, because doing it with GHC via IPC was too slow for interactive use).

Also, I recommend looking into embedding YHC. I have not had a chance to use it yet, but it looks like a better fit for an interpreter-only embedding situation than GHC--with GHC, you are getting a lot more than you seem to be asking for.
- Brian
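The multi-process design described above can be sketched with System.Process. This is only an illustration, not the actual editor code: the real worker would be a modified GHC executable, so here the standard `cat` command stands in for it and simply echoes each command back, and the `:check` command name is invented.

```haskell
import System.IO
import System.Process

-- A sketch of the editor/compiler split: the parent sends commands to
-- a worker process over a pipe and reads replies back. "cat" stands in
-- for the customized GHC executable, so the "reply" is just the echoed
-- command.
main :: IO ()
main = do
    (Just hin, Just hout, _, _) <-
        createProcess (proc "cat" [])
            { std_in = CreatePipe, std_out = CreatePipe }
    hPutStrLn hin ":check Main.hs"   -- an invented command name
    hFlush hin
    reply <- hGetLine hout
    putStrLn ("worker replied: " ++ reply)
```

Because the worker is a separate OS process, a crash in it cannot take the parent's unsaved state down with it, which is the point of the design.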
[Haskell-cafe] Embedding ghc
If you have to ask, you can't afford it. :)

The GHC API is not modular--if you use it, then the entirety of GHC is included in your program. That means that your executable will increase by the size of ghc.exe. Furthermore, you will also need to ship several of the supporting executables alongside the result. Basically, using the GHC API is going to add nearly the entire size of the full GHC distribution to your application. On Windows, if you don't use via-C code generation, then you might be able to cut out some parts of the embedded GCC distribution (libraries, etc.).

I once looked at modularizing GHC so that parts like the bytecode interpreter, via-C compilation, Template Haskell, and compilation to/from external Core could be removed to reduce the size of the program. However, it is non-trivial to do this, and I didn't want to maintain a branch of GHC with these parts factored out. If you want a modular GHC, it would be better to come up with a proposal that the GHC maintainers will accept for the mainline, and then implement it. I would like to do so, but currently I am busy with other projects.

- Brian

On 2/10/07, Joel Reymont [EMAIL PROTECTED] wrote: Has anyone tried embedding ghc into their app? How big are the resulting binaries? Thanks, Joel -- http://wagerlabs.com/
[Haskell-cafe] Re: Configurations proposal
On 10/24/06, Duncan Coutts [EMAIL PROTECTED] wrote: On the other hand, in Gtk2Hs I know one case where we do this. We have a Graphics.UI.Gtk.Cairo api module that is only included if Gtk was built against Cairo. In any case it could be faked by using cpp to just not export anything rather than not having the module exposed at all. So it's not clear that it's worth banning. Or maybe making it slightly harder is worth it so that people don't get in the habit.

Couldn't you split this into Gtk and Gtk-Cairo packages, where the latter is only built if Cairo is available? Similarly, in your GUI example, couldn't you have separate foo and foo-gui packages, and only build the foo-gui package if the GUI libraries are available? Otherwise, how can you say "I depend on the Gtk package being built with Cairo support" or "I depend on the GUI portion of the foo package"?

In general, optional groups of modules should be split off into separate packages, and there should be a way of building a bundle of related packages together (just like one can build a group of related executables together already).

Regards,
Brian
Re: [Haskell-cafe] Re: Read a single char
On 10/23/06, Neil Mitchell [EMAIL PROTECTED] wrote: Hi, getChar doesn't return until I press Enter. I need something that returns immediately after I press any key. It's a problem with buffering: hSetBuffering stdin NoBuffering

This usually doesn't work on Windows:
  GHC 6.4.2 and 6.6: requires Enter
  Hugs (console), Sept. 2006: requires Enter
  WinHugs (GUI), Sept. 2006: works as expected

But it seems to work on Linux:
  GHC 6.4.1 on Ubuntu 6.06: works as expected
  GHC 6.6 on Ubuntu 6.06: works as expected

I am really interested in hearing of a solution that works on all platforms.

  import IO

  main = do
      hSetBuffering stdin NoBuffering
      hGetChar stdin

Regards,
Brian
Re: [Haskell-cafe] Re: A better syntax for qualified operators?
On Wed, Oct 18, 2006 at 04:42:00PM +0100, Simon Marlow wrote: So you won't be able to colour patterns differently from expressions; that doesn't seem any worse than the context vs. type issue. Indeed, I'm not even sure you can colour types vs. values properly. Look at this: data T = C [Int] -- at this point, is C a constructor? What if I continue the declaration like this: data T = C [Int] `F`

Haskell is easier than Java in this type of situation, because Haskell's VARID and CONID are lexically distinct, whereas Java's equivalents are lexically identical. Modern Java IDEs color them correctly by doing (at least) two passes of highlighting: one during lexing and one after renaming/typechecking. As a result, they color identifiers based on lots of semantic information, including their scope, visibility, and other factors. IntelliJ will even do data flow analysis to color an identifier differently within a single method depending on whether or not the variable can be null at each occurrence.

I think that an editor for Haskell would need to use a similar technique to be useful. For example, I want top-level values colored differently than local values, and I want exported, non-exported, imported, and unbound identifiers highlighted differently. And I want parameters to be highlighted based on their strictness (determined automatically). This cannot generally be done until the entire module (as well as all of the modules it depends on) has been parsed.

In summary, I think that doing any syntax highlighting or other analysis of a Haskell module before it has gone through the renaming phase is a dead end.

Regards,
Brian
Re: [Haskell-cafe] Writing forum software in Haskell
On 9/24/06, David House [EMAIL PROTECTED] wrote: Hi all. The recent thread on email vs. forums inspired a bit of interest to get some forum software on haskell.org. Were we to go ahead with this, I think it'd be great to have this software written in Haskell itself.

If you were to write all-new software, then I suggest:

* Haskell.org single sign-on: one username/password combination that is valid across all Haskell.org services (wiki, forums, mailing lists, CVS, etc.)

* A forum-like interface to the existing mailing lists: instead of building a full-blown forum, why not create a web interface that nicely organizes the archives of the existing mailing lists into a forum-like interface. Then, add a mechanism for sending messages to those mailing lists through a forum-like posting interface (using the username/password), which would result in a message being sent to the mailing lists.

A lot of existing Haskell users are content with the mailing lists, I believe. Forums are not going to add a lot of value for the existing users. So, unless there is some gateway between the existing mailing lists and the forums, you should expect that most experienced Haskellers would ignore the forums altogether. That would make the forums less than useful, if questions asked in the forums could only be answered by people who exclusively use the mailing lists.

Regards,
Brian
Re: [Haskell-cafe] Are associated types synonyms like type classes?
Bulat, Stefan,

My question wasn't clear. I understand already that classes with associated types are an alternative to MPTCs with fundeps. When using an AT, we have to decide what part of the abstraction is the class and what part is the associated type. Sometimes this seems arbitrary. If we have:

  class A a where
      type B b
      f :: a -> B b

  instance A Int where
      type B = Bool
      f = (==0)

can't we also rewrite it as:

  class B b where
      type A a
      f :: A a -> b

  instance B Bool where
      type A = Int
      f = (==0)

What is the practical difference between class A and class B? With class A we can define instances so that f is overloaded as (Int -> Bool), (String -> Bool), (Bool -> Bool) by defining instances of A for Int, String, and Bool, but we cannot overload it further as (Int -> String), (Int -> Int), because we can only have one instance A Int. The converse is true for class B. Is that the only difference? If so, then why do we have associated types instead of associated classes like this?

  class A a where
      class B b where
          f :: a -> b

  instance A Int where
      instance B Bool where
          f = (==0)

The made-up syntax I presented in my previous message was just a different way of writing the above (which is also made-up syntax):

  class A a, B b where
      f :: a -> b

  instance A Int, B Bool where
      f = (==0)

  class Elem elem, Collect collect where
      empty :: collect
      insert :: elem -> collect -> collect
      toList :: collect -> [elem]

Thanks for your help,
Brian

On 9/1/06, Bulat Ziganshin [EMAIL PROTECTED] wrote: Hello Brian, Friday, September 1, 2006, 8:32:55 PM, you wrote: I read the easy parts of the "Associated Types with Class" and "Associated Type Synonyms" papers. An associated type synonym seems to kind of work similarly to a restricted form of class. In what way are the two following examples different?

  -- define a class with a type synonym, and a set of operations
  class A a where
      type B b
      foo :: a -> B b

  instance A Int where
      type B = Bool
      foo = (==0)

  -- define two classes, and a function that .
  class A a, B b where
      foo :: a -> b

  instance A Int, B Bool where
      foo = (==0)

Where did you find such unusual syntax? :) GHC/Hugs support multi-parameter type classes (MPTCs):

  class AB a b where
      foo :: a -> b

  instance AB Int Bool where
      foo = (==0)

AT replaces MPTC with FD (functional dependencies), which allow you to specify which type parameter of an MPTC is determined by another one, i.e.:

  class AB a b | a -> b where

For further details about MPTC+FD, see chapter 7.1.1 in the Hugs manual: http://cvs.haskell.org/Hugs/pages/hugsman/exts.html

Also, has anybody written a paper on the differences between type classes + associated types and ML's module system + overloading?

"ML Modules and Haskell Type Classes: A Constructive Comparison" http://www.informatik.uni-freiburg.de/~wehr/diplom/Wehr_ML_modules_and_Haskell_type_classes.pdf

--
Best regards,
Bulat    mailto:[EMAIL PROTECTED]
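The MPTC+FD version Bulat describes can be tried directly in GHC. Here is a minimal, self-contained sketch using the instance from the thread:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}

-- The two-parameter class from the thread: the dependency a -> b says
-- that the result type b is uniquely determined by the argument type a.
class AB a b | a -> b where
    foo :: a -> b

instance AB Int Bool where
    foo = (== 0)

main :: IO ()
main = print (foo (3 :: Int))  -- prints False, since 3 /= 0
```

Without the `| a -> b` dependency, the call `foo (3 :: Int)` would be ambiguous: nothing would pin down which `b` to use.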
[Haskell-cafe] Are associated types synonyms like type classes?
I read the easy parts of the "Associated Types with Class" and "Associated Type Synonyms" papers. An associated type synonym seems to kind of work similarly to a restricted form of class. In what way are the two following examples different?

  -- define a class with a type synonym, and a set of operations
  class A a where
      type B b
      foo :: a -> B b

  instance A Int where
      type B = Bool
      foo = (==0)

  -- define two classes, and a function that .
  class A a, B b where
      foo :: a -> b

  instance A Int, B Bool where
      foo = (==0)

Also, has anybody written a paper on the differences between type classes + associated types and ML's module system + overloading?

Thanks,
Brian
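For comparison, GHC's implementation of associated type synonyms (the TypeFamilies extension) spells the first example as follows. Note that GHC indexes the associated type by the class variable, i.e. `type B a` rather than the message's `type B b`:

```haskell
{-# LANGUAGE TypeFamilies #-}

-- The class A from the message, in the form GHC accepts: the
-- associated type B is indexed by the class parameter a.
class A a where
    type B a
    foo :: a -> B a

instance A Int where
    type B Int = Bool
    foo = (== 0)

main :: IO ()
main = print (foo (0 :: Int))  -- prints True, since 0 == 0
```

Each instance fixes both the operation `foo` and the result type `B a`, which is exactly the "one instance A Int" limitation discussed in the reply above.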
Re: [Haskell-cafe] Proposal to allow {} instead of () in contexts
On 8/23/06, Brian Hulley [EMAIL PROTECTED] wrote: Hi - Disregarding my last proposal, which involved the use of {} in types, I am wondering if anyone would agree with me that it would be a good idea to use {} instead of () when writing out the context, i.e.:

  foo :: (Num a, Bar a) => a -> a

would become:

  foo :: {Num a, Bar a} => a -> a

and the same for every other situation in the language where a context appears. My reasons are twofold: 1) The context is supposed to be understood (afaiu) as a *set* of constraints, not a *tuple* of constraints (i.e., what relevance does the ordering implied by the tuple notation have here?), so using set braces seems mathematically more appropriate.

I just started programming in Haskell again recently, and I cannot even think of a case where any kind of brackets should be necessary. The report [1] clearly shows that a context is always followed by =>. Are the parentheses just used to reduce lookahead requirements for parsers? If so, perhaps the parentheses should be made optional to make contexts easier for people to read. Plus, then there would not be any tuple vs. set confusion. BTW, at least GHC allows duplicates in the context, like [ foo :: (Num a, Num a) => a -> a ], so I don't know if calling it a set is really appropriate either.

2) It would allow an editor to give the correct fontification for an incomplete type expression. At the moment, if you'd just typed:

  foo :: (Bar

the editor would have no way of knowing if Bar is a class or a tycon, whereas with:

  foo :: {Bar

the opening brace immediately informs the editor that the following text should be parsed as a context, and so it could fontify Bar appropriately.

What about [ foo :: Bar ] when typing [ foo :: Bar a => a -> a ]? It would be a mistake to require the grouping symbols even when there is only one element in the context. I think that the editor has to know enough about the program to distinguish classes and type constructors without any grouping-symbol requirement.
- Brian
Fwd: [Haskell-cafe] A restricted subset of CPP included in a revision of Haskell 98
On 8/20/06, Brian Hulley [EMAIL PROTECTED] wrote: Henning Thielemann wrote: On Thu, 17 Aug 2006, Brian Smith wrote: I find it strange that right now almost every Haskell program directly or indirectly (through FPTOOLS) depends on CPP, yet there is no effort to replace CPP with something better or standardize its usage in Haskell. I think there should be more effort to avoid CPP completely. I agree, especially as I'm trying to write an editor for Haskell which will certainly not cope with CPP at all! ;-)

I agree with this sentiment. But CPP also has some advantages over the other suggestions presented here. Using CPP, we can create libraries that work with GHC 5.x and old versions of Hugs. If an alternative to CPP were chosen, would we update fptools to use this alternative? If it involves new syntax, then updating FPTOOLS to use the new syntax would break backward compatibility. And, if (the vast majority of) FPTOOLS does not use the solution, then it is not useful.

The reason it would not cope is that CPP turns what would otherwise be one program/module/library into several programs/modules/libraries which simultaneously co-exist in the same text in a rather uneasy and vague relationship, and what's even worse: the same module can have multiple meanings in the *same* program depending on the use of #ifdef, #undef, etc., thus making code navigation quite impossible: the meaning of each module now depends on how you got there and might even be different the second time round...

Notice that with my suggestions none of these problems apply, because all external command-line parameters have a single static value over the whole program (two modules cannot depend on differing values of a single macro, and #undef is not allowed). Changing the bindings for these macros would invalidate any cached information the editor has. But the editor has to support updating its caches anyway, to deal with switching libraries in/out, etc.
I won't be able to use an IDE effectively if it can't handle the libraries in FPTOOLS, which liberally use the preprocessor. Either the IDE has to handle the preprocessor, or the libraries have to stop using it, or they have to meet somewhere in the middle.

I think the acid test would be to reach a point where anyone can download the source for some large program such as GHC and just type "ghc --make Main" and expect the program to be built in one pass with no problems.

I agree with this sentiment as well. Most of the little data-processing programs I have written are built exactly like that already. How can we get the existing libraries to build like that, though?

- Brian
Re: [Haskell-cafe] Description of Haskell extensions used by FPTOOLS
Simon,

I am familiar with the GHC library, as I used it a year or so ago to create a very cheap Haddock knockoff. I used the GHC library to do the type inference (which Haddock didn't do at the time) and to deal with elements that didn't have source code available. I remember that I created it specifically because I couldn't get Haddock to work on the GHC API in a useful way. I am looking forward to the result of the SoC project. IIRC, the GHC API needs to be modified to provide (better) support for parsing and typechecking code with syntax errors. That is something that I can probably do when I get to that point.

Even if a tool is implemented using the GHC library, it is likely that it would need to limit itself to some subset of GHC's features. For example, an IDE that had a "define instance" feature would need special code to deal with associated types. Similarly, it is better for a tool to refuse to operate on code using implicit parameters than to (silently) fail to handle them properly, because people tend to hate tools that are really convenient but occasionally mess up their programs.

Regards,
Brian

On 8/18/06, Simon Peyton-Jones [EMAIL PROTECTED] wrote: Brian, Great! You might like to consider using GHC as a library http://haskell.org/haskellwiki/GHC/As_a_library The advantage is that you just import GHC and then you can parse all of Haskell (including GHC's extensions). Then you can rename it to resolve lexical scopes, typecheck, and so on. It will certainly deal with all of Darcs… because GHC compiles Darcs. It's all supposed to be a good basis for tools that consume and analyse Haskell programs, which is exactly what you propose to do. For example, there's a summer-of-code project to use it for Haddock. That said, the API is really just what we needed to build GHC itself. It needs a serious design effort. One of the things that would motivate such an effort would be customers saying "I needed to do X with the API and it was inconvenient/impossible."
Still, it does work, today.

Simon

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Brian Smith
Sent: 17 August 2006 17:01
To: haskell-cafe@haskell.org
Subject: [Haskell-cafe] Description of Haskell extensions used by FPTOOLS
[Haskell-cafe] Description of Haskell extensions used by FPTOOLS
Is there any design document for the FPTOOLS libraries or some description of the language features that are (allowed to be) used in them?

I am going to be taking some significant time off from my normal jobs in the upcoming months. During part of that time, I would like to do some work to improve the Haskell toolchain. This involves creating or improving tools that parse and analyze Haskell code. My goal is to have these tools support enough of Haskell to be able to handle at least the most important libraries used by Haskell programmers. In particular, this includes all or most of the libraries in FPTOOLS. Plus, I want these tools to operate on Darcs, as it is an obvious poster child for Haskell. Thus, I need to support Haskell 98 plus all the extensions being used in Darcs and FPTOOLS as of approx. March 2007 (as I intend to start working again at that time).

It would be very nice if there were some document that described Haskell 98 plus all the extensions being used in Darcs and FPTOOLS as of March 2007. Besides being useful to me, it would be a useful guide for potential contributors to FPTOOLS.

Regards,
Brian
[Haskell-cafe] A restricted subset of CPP included in a revision of Haskell 98
Hi,

I find it strange that right now almost every Haskell program directly or indirectly (through FPTOOLS) depends on CPP, yet there is no effort to replace CPP with something better or standardize its usage in Haskell. According to the following document, and my own limited experience in reading Haskell code, CPP is the most frequently used extension: http://hackage.haskell.org/trac/haskell-prime/wiki/HaskellExtensions

I think that if we accepted that CPP was part of the language, we could then place some restrictions on its use to facilitate easier parsing. Here are some suggestions, off the top of my head:

* #define can only be used for parameterless definitions
* #define'd symbols are only visible to the preprocessor
* #define can only give a symbol a value that is a valid preprocessor expression
* #define can only appear above the module declaration
* a preprocessor symbol, once defined, cannot be undefined or redefined
* #include and #undef are prohibited
* the preprocessor can only be used at the top level. In particular, a preprocessor conditional, #error, #warn, or #line would not be allowed within the export list or within a top-level binding.
* a Haskell program must assume that any top-level symbol definitions are constant over the entire program. For example, a program must not depend on having one module compiled with one set of command-line preprocessor symbol bindings and another module compiled with a different set of bindings.
* preprocessor directives must obey Haskell's layout rules. For example, an #if cannot be indented more than the bindings it contains.

The result would be:

* syntax can be fully checked without knowing the values of any preprocessor symbols
* preprocessor syntax can be added easily to a Haskell parser's BNF description of Haskell
* no tool will need to support per-file/module preprocessor symbol bindings

Again, all this is just off the top of my head.
I am curious about what problems these restrictions might cause, especially for existing programs. I know that GHC itself uses some features that would be prohibited here. But GHC is really difficult for tools to handle even with these restrictions on its source code. For now, I am more interested in the libraries in FPTOOLS and users' programs. What libraries/programs cannot easily be reorganized to meet these restrictions? I suspect "#define'd symbols are only visible to the preprocessor" would be the most troublesome one.

Thanks,
Brian
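As a rough illustration of what a module obeying these restrictions might look like (the macro name is invented, and GHC's CPP language pragma is used here just so the sketch is self-contained and runnable):

```haskell
{-# LANGUAGE CPP #-}
-- The only #define sits above the module declaration, and the
-- conditional guards a whole top-level binding while being indented
-- no more than the binding it contains.
#define USE_LONG_GREETING
module Main where

main :: IO ()
main = putStrLn greeting

greeting :: String
#ifdef USE_LONG_GREETING
greeting = "Hello from the preprocessed build"
#else
greeting = "Hello"
#endif
```

Note that the macro is only tested, never spliced into an expression, which is what the "only visible to the preprocessor" restriction demands.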
Re: [Haskell-cafe] A restricted subset of CPP included in a revision of Haskell 98
On 8/17/06, John Meacham [EMAIL PROTECTED] wrote: On Thu, Aug 17, 2006 at 11:44:17AM -0500, Brian Smith wrote: Hi, I find it strange that right now almost every Haskell program directly or indirectly (through FPTOOLS) depends on CPP, yet there is no effort to replace CPP with something better or standardize its usage in Haskell. See this paper for some interesting work on the subject: http://citeseer.ist.psu.edu/wansbrough99macros.html

Thanks for that. I should not have said "there is no effort to replace CPP" before. I hope I did not offend anybody who has worked on this problem previously.

I was also mistaken in saying that syntax could be fully checked without knowing any preprocessor symbol bindings. This is only true if one gets rid of the ability to choose between two syntaxes via the preprocessor. But if we allow syntax that we can't parse (but presumably another implementation can), then the preprocessor must remain a true preprocessor. Then there isn't much reason to place so many restrictions on where the various preprocessor directives may appear.

I proposed to limit where #define could appear mostly for aesthetic reasons. If #define, #error, and #warn only appear at the beginning of a file, then the rest of the file would only contain Haskell syntax in between #if...#else...#endif. Also, a refactoring tool would not have these directives getting in its way. I want conditionals limited in their placement to make things easier for refactoring tools. But I don't have any ideas about how to deal with conditional exports without allowing preprocessor conditionals in the export list.
* #define can only be used for parameterless definitions
* #define'd symbols are only visible to the preprocessor
* #define can only give a symbol a value that is a valid preprocessor expression
* #define, #error, and #warn can only appear above the module declaration
* a preprocessor symbol, once defined, cannot be undefined or redefined with a different value
* #include and #undef are prohibited
* the preprocessor can only be used at the top level. In particular, a preprocessor conditional or #line would not be allowed within the export list or within a top-level binding.
* a Haskell program must assume that any top-level symbol definitions are constant over the entire program. For example, a program must not depend on having one module compiled with one set of command-line preprocessor symbol bindings and another module compiled with a different set of bindings.
* preprocessor directives must loosely obey a very simple layout rule: an #if, #else, or #endif cannot be indented more than the bindings it contains.
___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] RE: ANN: System.FilePath 0.9
Hi Neil,

On 7/17/06, Neil Mitchell [EMAIL PROTECTED] wrote:

> Hi Brian,
>
> You sent this email just to me, and not to the list. If you intended to send to the list then feel free to forward my bits on to the list.
>
> > I know that FilePath is defined by Haskell '98 as a String and so it cannot be changed. So, perhaps a new type or class should be created for this library (hereafter GoodPath, although I am not suggesting that is the best name).
>
> The problem is people will have to marshal their data into this GoodPath, and marshal it out again. When people can shortcut that marshalling, as the current readFile/writeFile definitions ensure they can, they will. At that point you lose all safety because people will abuse it.

I disagree. It would be trivial to create a new module that exported new definitions of file IO actions that operated on GoodPath instead of FilePath, transparently delegating to the original readFile/writeFile/etc. until they could be removed in the future. This would also support the SuperFilePath idea you mentioned.

Another thing I thought of would be a canonicalPath IO action (canonicalPath :: FilePath -> IO FilePath) that returns a FilePath that implements case-preserving, case-insensitive matching. For example, if there is a file named "Hello There.txt" in C:\, then canonicalPath "c:\hello there.txt" would give "C:\Hello There.txt".

I think that the xxxDrive functions should only be exported from System.FilePath.Windows and not System.FilePath, since it is unclear how they could be used effectively by cross-platform software.

- Brian
___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
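For what it's worth, the matching step such a canonicalPath would need can be sketched in a few lines; canonicalComponent below is a hypothetical helper (not part of any library) that resolves one path component case-insensitively against a directory listing:

```haskell
import Data.Char (toLower)
import Data.List (find)

-- Hypothetical helper: given the actual entries of a directory, return
-- the entry matching the requested name case-insensitively, falling back
-- to the requested name when nothing matches.
canonicalComponent :: [String] -> String -> String
canonicalComponent entries name = maybe name id (find sameName entries)
  where
    sameName e = map toLower e == map toLower name
```

A full canonicalPath would then walk the path root-to-leaf, reading each directory (e.g. with System.Directory's getDirectoryContents) and canonicalizing one component at a time.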
[Haskell-cafe] RE: ANN: System.FilePath 0.9
[ I tried to send this through the fa.haskell USENET group but apparently it did not go through ]

I kind of expect that a Haskell library for file paths will use the type system to ensure some useful properties about the paths. For example, when writing security-conscious software I often want to be able to distinguish between absolute paths, ascending paths (paths with a leading ../ and similar items), and descending paths (paths that contain no ../).

I want to make sure a filename is valid. For example, "prn" and "con" are not valid path elements for normal files on Windows, certain characters are not allowed in filenames (depending on the platform), and some platforms may require paths to be escaped in different ways. I see there is an isValid function and even a (magical) makeValid function, but they do not report what was wrong with the filename in the first place. Furthermore, it isn't clear from the documentation how these functions determine whether a filename is valid.

Many people might be familiar with Joel Spolsky's article "Making Wrong Code Look Wrong" at http://www.joelonsoftware.com/articles/Wrong.html . If you read that article, you will see that it advocates a variable naming convention to distinguish between HTML-escaped (safe) and non-escaped (unsafe) strings. In Haskell, we can do much better than that by using convenient strong static typing to enforce such constraints. We should be able to use type signatures to ensure that we are operating on syntactically valid paths of a certain form. For example, a function's type should indicate that it expects an absolute path, or a relative path that doesn't use ../../.. to descend into parent/sibling directories unexpectedly, which is a common exploit technique.

IMO, safety is the most important issue regarding file paths, and it is not addressed in this library as far as I can see. Writing code to handle these issues is tedious, error-prone, and boring to write despite being critical.
It isn't the kind of code that you want to just download off of some guy's webpage. Basically, it is exactly the type of thing that belongs in a standard library.

In this library proposal, there are a bunch of xxxDrive functions that many Unix-oriented programmers are sure to ignore because they are no-ops on Unixy systems. Even on Windows, they are not very useful: Windows software almost never cares what drive a directory is on, especially because the directory might not be on a drive at all (for example, a UNC network path \\foo\bar has no drive component, but it is an absolute path). Really, Windows software usually works similarly to Unixy software in this regard--the big difference is that Unixy systems have one root path, whereas Windows systems can have many, many root paths (drives and UNCs).

- Brian
___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
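A minimal sketch of the kind of type-level distinction argued for above, assuming POSIX-style '/' separators; DescendingPath and descending are invented names for illustration, not part of any proposed library:

```haskell
-- Invented type: a relative path proven to contain no ".." components.
newtype DescendingPath = DescendingPath FilePath deriving (Eq, Show)

-- Split on '/', kept dependency-free so the sketch stands alone.
splitSlash :: FilePath -> [String]
splitSlash = foldr step [""]
  where
    step '/' acc          = "" : acc
    step c   (cur : rest) = (c : cur) : rest
    step _   []           = [""]  -- unreachable: accumulator is never empty

-- Reject absolute paths and any path with a ".." component, so functions
-- taking a DescendingPath can rely on it staying below its root.
descending :: FilePath -> Either String DescendingPath
descending p
  | take 1 p == "/"          = Left "absolute path"
  | ".." `elem` splitSlash p = Left "path ascends"
  | otherwise                = Right (DescendingPath p)
```

A function that only accepts a DescendingPath then documents and enforces its safety requirement in its type signature, which is exactly the property the isValid/makeValid pair cannot express.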
Re: [Haskell-cafe] Updating the Haskell Standard
On 7/20/05, John Goerzen [EMAIL PROTECTED] wrote:

> There was a brief discussion on #haskell today about the Haskell standard. I'd like to get opinions from more people, and ask if there is any effort being done in this direction presently. I think an updated standard is overdue. I find it difficult anymore to write any but the most trivial of programs using pure Haskell 98. Some notable, and widely-used, features developed since then include:
>
> * Overlapping instances
> * FFI
> * Hierarchical namespace
> * Undecidable instances

Even if undecidable instances were standardized, would we want them turned on by default? I am trying to write real programs in Haskell and I have never even contemplated using undecidable instances. My understanding is that they can be unintuitive, and they can cause typechecking to fail to halt. So, it seems reasonable to require that undecidable instances require some kind of option to be present. Thus, there would be a standard "undecidable instances" option or pragma.

Now, it seems reasonable that, if we can standardize the option for undecidable instances, we could do the same thing for all new features we wish to add to Haskell 2. This is basically what the Cabal {-# LANGUAGE UndecidableInstances CPP PatternGuards ... #-} pragma does. Each implementation would have a set of pragmas that it supports. It would be best if the implementors agreed on a specification for each feature, so that, e.g., {-# LANGUAGE UndecidableInstances #-} works identically wherever it is supported.

Eventually, we would all look around at each other and realize "hey, GHC, Hugs, and NHC all support pragmas A, B, C, ..., and these pragmas are so useful they should be available by default." Then, we could make a new option: {-# LANGUAGE Haskell 2005 #-}. This would be equivalent to {-# LANGUAGE A B C ... #-}. Then, we would say that, if {-# LANGUAGE Haskell x #-} is omitted, then x defaults to 98. Note that this works for deletions too: {-# LANGUAGE NoDeletedFeature #-}.
I imagine something similar would work for libraries: every implementation would build up a set of libraries it supports by default. We would recognize the common set of packages supported and say "this set of packages is the Haskell 2005 standard library."

The bad thing about this is that "Hello World, Haskell 2005" would become kind of ugly:

{-# LANGUAGE Haskell 2005 #-}
main = putStrLn "Hello, World!"

But, of course, Haskell 2005 would be backwards compatible enough to support the 98 version:

main = putStrLn "Hello, World!"

I guess there must be some reason that this scheme is really horrible, because I don't know of any language that has ever done things this way. But it seems to make sense to me...

- Brian
___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
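As it happens, GHC later adopted essentially this scheme: per-file {-# LANGUAGE ... #-} pragmas name individual extensions, and whole language editions (e.g. Haskell2010) can be selected the same way. A sketch of what such a file looks like today (the extension chosen here is arbitrary):

```haskell
{-# LANGUAGE ScopedTypeVariables #-}

greeting :: String
greeting = "Hello, World!"

main :: IO ()
main = putStrLn greeting
```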
[Haskell-cafe] Cabal with Alex and Happy
Is there an example of how to build a Cabal package that has a lexer generated with Alex and a parser generated with Happy? As far as I can tell, the way to do this is to add

Other-Modules: Module.Name.Of.Parser.y Module.Name.Of.Lexer.x

to each executable/library stanza. But, when I try this, I get:

Could not find module `GHC.Exts': it is a member of package base-1.0, which is hidden

The generated parser code contains:

#if __GLASGOW_HASKELL__ >= 503
import GHC.Exts
#else
import GlaExts
#endif

Also, I don't see any way of passing options to preprocessors using Cabal. In particular, how do you pass options to Happy?

I am using GHC and Cabal from the CVS head, built about two weeks ago.

Thanks,
Brian
___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
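For later readers: with today's Cabal the stanza can be written roughly as below. The package and module names here are hypothetical; the key points are that generated modules are listed by module name (Cabal locates the matching .y/.x sources and runs the preprocessors) and that the tools themselves are declared via build-tool-depends (Cabal 2.0+):

```cabal
executable myprog
  main-is:            Main.hs
  -- List generated modules by module name, not by source file name.
  other-modules:      Language.Foo.Parser
                      Language.Foo.Lexer
  -- array is needed by Happy-generated parsers.
  build-depends:      base, array
  build-tool-depends: alex:alex, happy:happy
```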
[Haskell-cafe] Visual Haskell Studio.NET 2005
Hi, When will VHS support the Visual Studio.NET 2005 Beta? I'd like to volunteer to test VHS.NET 2005 when it is available. (Also, MS is giving away the VS.NET 2005 beta for free, and VS.NET 2003 costs a whopping $15.00 from my school's bookstore). Thanks, Brian ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Re: OCaml list sees abysmal Language Shootout results
On Mon, 11 Oct 2004 14:16:36 -0700, John Meacham [EMAIL PROTECTED] wrote:

> On Mon, Oct 11, 2004 at 12:22:13PM +0100, Malcolm Wallace wrote:
> > So is it fair to compare the default lazy Haskell solution with all the eager solutions out there that laboriously do all this unnecessary work? Apparently not, so we have gone to all kinds of trouble to slow the Haskell solution down, make it over-strict, do the work N times, and thereby have a "fair" performance test.
>
> Huh. I think the naive way is perfectly fair. If haskell has to live with the disadvantages of lazy evaluation, it only makes sense we should be able to take advantage of the advantages. The fact that haskell doesn't have to compute those intermediate values is a real advantage which should be reflected in the results IMHO.
>
> John

This seems especially true if you have to add extra lines of code to make the tests "fair", because this extra code counts against Haskell in the lines-of-code metric. Personally, I am more impressed by the lines-of-code metrics than I am by the performance metrics.

- Brian
___ Haskell-Cafe mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Control.Monad.Error with a custom error type
I have:

data Reference = Ref [String] String

data ReferenceError = RefError
    { expectedType :: String    -- type of element we were looking for (e.g. type, package)
    , pointOfError :: Reference -- path to deepest parent element not found in path
    }

type ReferenceMonad = Either ReferenceError

I want to write functions that use Either ReferenceError a as the error monad, instead of the more common Either String a. In particular, I want to be able to write:

type Model = [(String,Type)]

findType :: Model -> Reference -> ReferenceMonad Type
findType m r@(Ref [] name) =
    case lookup ((==name) . nameOf) m of
        Nothing -> throwError r
        Just x  -> return x

I know that I could make this work by making ReferenceError an instance of the Error class, but I cannot provide meaningful implementations of noMsg and strMsg for ReferenceError. So, it seems instead I need to make (Either ReferenceError) an instance of MonadError. However, when I try

instance MonadError (Either ReferenceError)

I get:

Kind error: `Either ReferenceError' is not applied to enough type arguments
When checking kinds in `MonadError (Either ReferenceError)'
In the instance declaration for `MonadError (Either ReferenceError)'

So, how do I get the effect I want for findType? Besides throwError, I also want to use catchError.

Thanks,
Brian (Haskell newbie)
___ Haskell-Cafe mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell-cafe
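For what it's worth, with today's mtl no instance needs to be written at all: mtl provides MonadError e (Either e), so throwError and catchError work with a custom error type directly. A sketch under that assumption (Type, Model, and the lookup details are simplified stand-ins, since the original definitions aren't fully shown):

```haskell
import Control.Monad.Except (throwError, catchError)

data Reference = Ref [String] String deriving (Eq, Show)

data ReferenceError = RefError
    { expectedType :: String    -- kind of element being looked up
    , pointOfError :: Reference -- where the lookup failed
    } deriving (Eq, Show)

type ReferenceMonad = Either ReferenceError

-- Stand-in model: names mapped to a placeholder "type" representation.
type Type  = String
type Model = [(String, Type)]

-- throwError works as-is because Either ReferenceError is a MonadError.
findType :: Model -> Reference -> ReferenceMonad Type
findType m r@(Ref [] name) =
    case lookup name m of
        Nothing -> throwError (RefError "type" r)
        Just x  -> return x
findType _ r = throwError (RefError "type" r)

-- catchError works the same way, e.g. to substitute a default:
findTypeOr :: Type -> Model -> Reference -> ReferenceMonad Type
findTypeOr def m r = findType m r `catchError` \_ -> return def
```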