[Haskell] Haskell Job Opportunity
HAppS LLC has part-time and full-time positions open for Haskell programmers to:

* improve the open source Haskell codebase at HAppS.org
* implement infrastructure to make it work well in Amazon S3/EC2 environments
* make http://pass.net reliable enough to be used by live apps
* build the mass market apps we want to run on top of the HAppS/Pass.net platform

We are looking for people who:

* have substantial experience programming Haskell
* have experience building Internet apps (not necessarily in Haskell, but that would be good), and
* live in any of these places:
  * the Internet (with comfort working entirely over email, Skype, TightVNC, etc.)
  * New York
  * San Francisco
  * Los Angeles

If you qualify and are interested, please send an email with your cv/resume to me.

-Alex-

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell
Re: [Haskell] -compiler-options vs {-# LANGUAGE Flags #-}
I think the LANGUAGE pragma is better than the OPTIONS_GHC pragma, but I am relatively indifferent on that issue. My only point is that module authors should put this information in the module source rather than in the build system.

-Alex-

On Fri, 23 Mar 2007, Alec Berryman wrote:

> S. Alexander Jacobson on 2007-03-23 17:16:26 -0400:
>
> > The correct answer, I believe, is to require that module authors put
> > flags in the module themselves where they belong.
>
> Does the OPTIONS_GHC pragma solve your issue?
>
> http://www.haskell.org/ghc/docs/latest/html/users_guide/using-ghc.html#source-file-options
[Haskell] -compiler-options vs {-# LANGUAGE Flags #-}
In theory, Haskell modules should contain all the information necessary to interpret/compile them properly. In practice, many modules don't, because their authors flag required language extensions in build systems rather than in module sources. This practice causes problems because it then becomes much harder to move modules between packages or to use a particular module outside of a particular build system context.

I experience this problem when I use SearchPath to build HAppS applications. For example,

  $ sp runhaskell MyModule.hs

will recursively download imported modules into a local directory and then run:

  $ runhaskell -i.haskell-cache MyModule.hs

If all the imported modules contain all the right flags in their source, then MyModule.main will just run (no need for manual recursion through a cabal package dependency tree!). If some of the imported modules do not include the correct flags, then an error message will result and I am then forced to do:

  $ sp runhaskell MyModule.hs -cpp -fglasgow-exts -etc

And this is bad because (1) some of the modules may not want a particular extension turned on, and (2) it is annoying for the user to have to guess at required extensions until they are all added on the command line. Problem (1) is probably small, as evidenced by Cabal's use of global extensions without much complaint. Problem (2) is bigger.

Most other modern languages, e.g. Java, Python, Perl, Ruby, etc., don't rely on compiler/interpreter command line options to designate language extensions. Neither should Haskell. The correct answer, I believe, is to require that module authors put flags in the modules themselves, where they belong. At the very least it should be considered bad style not to have them in your code, and really what would make more sense is for compiler writers to disable command line flags that specify language extensions.
-Alex-

PS Until this is resolved, though it is a hack, I will probably have SearchPath automatically add all GHC language extension flags to the command line if the user is using ghc, runghc, or ghci, i.e.:

  "-cpp -fth -fglasgow-exts -fallow-undecidable-instances -fallow-incoherent-instances -fallow-overlapping-instances"

Another solution would be to have SearchPath map files also carry cabal file URLs and get the flags from there, but some people still use makefiles or other randomness, and I would like to make SearchPath work with Java, Perl, Python, and Ruby without much change.
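For illustration, here is a minimal sketch of the in-source flags the post advocates. The module contents and the chosen extensions are hypothetical; the point is only that the pragma travels with the source, so any build system (or a bare runhaskell) compiles it correctly:

```haskell
{-# LANGUAGE CPP, FlexibleInstances #-}
-- Extensions are declared in the source itself rather than on the
-- compiler command line or in a build file.
module Main where

class Describe a where
  describe :: a -> String

-- FlexibleInstances (which implies TypeSynonymInstances in GHC)
-- permits this instance head, since String = [Char].
instance Describe String where
  describe s = "string: " ++ s

main :: IO ()
main = putStrLn (describe "hello")
```

With the pragma in place, `runhaskell Main.hs` needs no extra flags.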
[Haskell] Re: STM applications and examples?
The HAppS MACID monad runs on top of STM. It is open source and available via darcs at http://happs.org

http://Pass.net is a production application that uses this infrastructure. The source for pass.net is available via:

  darcs get http://pass.net/s/repo

Hope that helps.

-Alex-

On Tue, 13 Jun 2006, Simon Marlow wrote:

> Hi Folks,
>
> I'm interested in gathering information about existing STM applications and example code that people might be working on or have lying around. It would be great to maintain a list of such applications and example code on the wiki somewhere, and use it as a resource for learning STM and research into STM implementations.
>
> If you have an application using STM, please let us know. If you can supply the code, even better. Similarly, if you have some STM code that might be suitable as samples or benchmarks, we'd love to collect it.
>
> Thanks!
> Simon

______
S. Alexander Jacobson
tel:917-770-6565
http://alexjacobson.com
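For readers collecting starter examples, here is a minimal STM sketch (independent of HAppS/MACID, using the stm library that ships with GHC); the account/transfer framing is my own illustration:

```haskell
import Control.Concurrent.STM

-- Move an amount between two balances atomically: either both
-- updates happen or neither does, even with many threads running.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amt = do
  modifyTVar' from (subtract amt)
  modifyTVar' to   (+ amt)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  balances <- atomically ((,) <$> readTVar a <*> readTVar b)
  print balances  -- (70,30)
```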
Re: [Haskell] ANNOUNCE: Google Summer of Code Projects
Is there a link to the project descriptions somewhere? They look really interesting.

-alex-

On Wed, 24 May 2006, Donald Bruce Stewart wrote:

> We are very pleased to announce that nine Haskell projects have been selected to receive funding to the value of $45k under Google's 2006 Summer of Code program. A wide range of projects will be worked on, contributing important tools and libraries to the community. The students have until August 21 to complete their projects and receive their grants.
>
> The Haskell.org team of mentors would like to thank Google for recognizing the importance of supporting the Haskell language and community. This year was extremely competitive, with over 110 Haskell project submissions. Of these, 38 projects received high scores and willing mentors, and 66 received positive reviews overall. We hope that many of the unsuccessful projects will be worked on nonetheless.
>
> The following projects were successful. Congratulations to these students!
>
> * Fast Mutable Collection Types for Haskell, Caio Marcelo de Oliveira Filho. Mentor: Audrey Tang
> * Port Haddock to use GHC, David Waern. Mentor: Simon Marlow
> * A model for client-side scripts with HSP, Joel Björnson. Mentor: Niklas Broberg
> * GHCi based debugger for Haskell, José Iborra López. Mentor: David Himmelstrup
> * HaskellNet, Jun Mukai. Mentor: Shae Erisson
> * Language.C - a C parser written in Haskell, Marc Ernst Eddy van Woerkom. Mentor: Manuel Chakravarty
> * Implement a better type checker for Yhc, Mathieu Boespflug. Mentor: Malcolm Wallace
> * Thin out cabal-get and integrate in GHC, Paolo Martini. Mentor: Isaac Jones
> * Unicode ByteString, Data.Rope, Parsec for generic strings, Spencer Janssen. Mentor: Don Stewart
>
> We wish the students good luck and good hacking!
>
> The Haskell.org Team.
[Haskell] monads, records, verbosity
Much of the discussion here recently has related to debate about complexifying the monad hierarchy, the abysmal Haskell record syntax, and the verbosity of various possible solutions. They appear to interrelate. Would it be possible/reasonable to get rid of data, class, and instance declarations and rely on type inference for everything? Here is my strawman version:

* Function definitions imply the data declarations required:

    foo True = Just "abc"
    foo False = Nothing

  ==implies==>

    data A1 = True | False
    data A2 a = Just a | Nothing

* All function definitions are actually instance declarations of an implicit class that implements that function, so e.g.

    foo True = Just "abc"
    foo False = Nothing

  ==implies==>

    class Foo a b where foo :: a -> b
    instance Foo A1 A2 where
      foo True = Just "abc"
      foo False = Nothing

* Multiple definitions of the same function imply distinct instance declarations:

    foo 0 = Nothing
    foo x = Just (x+1)

  ==implies==>

    class Foo a b where foo :: a -> b  -- same as above definition of Foo
    instance Foo A1 (A2 A1) where ...

* Field labels happen in context:

    mkPot = Pot {profit=0, amounts=[]}
    updatePot p = p {profit \= (1+), amounts \= (1:)}
    getProfit p = profit p

  ==implies==>

    data A3 = Pot Int [Int]
    class Profit a where profit :: a -> Int
    class Amounts a where amounts :: a -> [Int]
    instance Profit A3 where profit (Pot a _) = a
    instance Amounts A3 where amounts (Pot _ a) = a
    class UpdatePot a where updatePot :: a -> a
    instance UpdatePot A3 where updatePot (Pot a b) = Pot (1+a) (1:b)
    getProfit p = profit p

* Default instances are as follows:

    foo a b c = b          -- the default instance
    foo (a::Pot) b c = c   -- the specialized instance

Is this possible/reasonable?

-Alex-
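For comparison, here is the Pot example in today's Haskell: the very boilerplate the strawman wants the compiler to infer. The type and field names follow the post; the Show instance and main are added to make it runnable:

```haskell
data Pot = Pot { profit :: Int, amounts :: [Int] }
  deriving Show

mkPot :: Pot
mkPot = Pot { profit = 0, amounts = [] }

-- Today there is no "apply a function to a field" update syntax
-- (the post's hypothetical profit \= (1+)), so the (1+) and (1:)
-- updates must be spelled out by hand.
updatePot :: Pot -> Pot
updatePot p = p { profit = profit p + 1, amounts = 1 : amounts p }

getProfit :: Pot -> Int
getProfit = profit

main :: IO ()
main = print (getProfit (updatePot mkPot))  -- 1
```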
[Haskell] Re: ECT and SearchPath
On Tue, 13 Dec 2005, Sven Moritz Hallberg wrote:

> ... is a good scheme. For it to _really_ work, however, we should form a registry of "officially assigned" module names (i.e. ones without a disambiguation suffix) and also one for the author and/or organization names to be used as suffixes. I for example am known by the nickname Pesco, or (more recently) by the mnemonic SM at domain KHJK.ORG.

My solution to this problem in SearchPath is module maps. SearchPath lets you use multiple module maps simultaneously. So you can, for example, simultaneously use a global module map at haskell.org, an industry module map at yourassociate.org, an internal company map located on your intranet, and a personal map located on your computer. Conflicts are handled by the order in which you invoke the maps. So a map is official to the extent your community shares it. Different entities can also compete to provide maps of differing qualities, e.g. is the map audited for security issues? Does the map point to the repository head of all modules or to a static version? Etc.

Clay Shirky famously observed that in namespace construction there is a pretty direct tradeoff between having names be memorable, global, and non-political. I think shared module maps represent a good compromise.

> > I just read your ECT article, which seems to start with the same assumption and takes the theory to its logical conclusion.
>
> I take that as a compliment. ;) Thank you.

It was intended that way! :-)

> The main issue that springs to my mind is authenticity control. How do I know the module I'm downloading is the one I want? Note that this question is always there, even in the local case, but it is usually not seen as a problem until module sources are distributed over the Net.

I think https URLs solve this problem. You choose module maps from authors you trust and access those maps via https. These "friendly" module maps then point to "friendly" modules that you can also access via https. Since you choose which CAs you trust to sign server keys, you are safe from MITM attacks.

Note, I thought about supporting PGP, but there is no obvious way to sign a CVS/SVN/darcs repository, and what I like about ECT is that you can point to such repositories rather than to specific checked out versions, in the comfort that things won't break. I LOVE that, in combination with ECT, you get notified that you are using a deprecated version on the next compile without having to check all your imports manually.

FYI, the main thing that I would really like, but don't have and can't implement, is for :r in ghci to invoke SearchPath to hunt for missing modules, rather than forcing you to quit out and restart ghci every time you add an import of an undownloaded module in your code. Using SearchPath makes things very smooth, but this feature would make it completely so.

-Alex-
[Haskell] implicit responses/values
I am now using implicit parameters quite happily. But I also feel the need for the reverse. There are occasions where I want to pass a value back through the set of calling functions, e.g. produce HTTP cache header info through functions that are not really about that. I am not sure how this would work, but it strikes me as useful. Anything like that in the offing?

-Alex-
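One existing way to approximate this — a sketch, not a language feature like implicit parameters — is the Writer monad from the mtl package: any function in the call chain can `tell` extra output (e.g. cache headers) that accumulates upward through its callers. The page-rendering functions below are hypothetical:

```haskell
import Control.Monad.Writer

-- A renderer that is "not really about" cache headers can still
-- emit one; everything told anywhere below accumulates upward.
renderPage :: Writer [String] String
renderPage = do
  body <- renderBody
  return ("<html>" ++ body ++ "</html>")

renderBody :: Writer [String] String
renderBody = do
  tell ["Cache-Control: max-age=3600"]  -- the implicit "response"
  return "hello"

main :: IO ()
main = do
  let (page, headers) = runWriter renderPage
  putStrLn page        -- <html>hello</html>
  mapM_ putStrLn headers
```

The cost relative to a true "implicit response" is that every function on the path must live in the Writer monad.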
RE: [Haskell] Realistic max size of GHC heap
Should one interpret this as GHC now targets 64-bit systems, or does one need to employ some sort of cleverness to use this much memory? (I posted this question a while ago and was told that GHC did not at that time support 64-bit, so could not use that much memory.)

On a related note, does GHC now distribute IO threads over multiple CPUs, or is it still a 1 CPU system?

-Alex-

On Thu, 15 Sep 2005, Simon Marlow wrote:

> On 15 September 2005 01:04, Karl Grapone wrote:
>
> > I'm considering using haskell for a system that could, potentially, need 5GB-10GB of live data. My intention is to use GHC on Opteron boxes which will give me a max of 16GB-32GB of real ram. I gather that GHC is close to being ported to amd64. Is it a realistic goal to operate with a heap size this large in GHC? The great majority of this data will be very long tenured, so I'm hoping that it'll be possible to configure the GC to not need too much peak memory during the collection phase.
>
> It'll be a good stress test for the GC, at least. There are no reasons in principle why you can't have a heap this big, but major collections are going to take a long time. It sounds like in your case most of this data is effectively static, so in fact a major collection will be of little use. Generational collection tries to deal with this in an adaptive way: long-lived data gets traversed less and less often as the program runs, as long as you have enough generations. But if the programmer really knows that a large chunk of data is going to be live for a long time, it would be interesting to see whether this information could be fed back in a way that the GC can take advantage of it. I'm sure there must be existing techniques for this sort of thing.
>
> Cheers,
> Simon
[Haskell] ANN: SearchPath 0.5 and Request for https module URLs
At Simon Marlow's suggestion, I have implemented Internet import chasing. You wrap your compiler/interpreter call with "searchpath" and pass the correct parameters; it will find all your module imports, and if they are not local, it will look them up in the Internet module directories you specify, retrieve the relevant modules, put them on your path, and then run your compiler/interpreter with the appropriate command line options. e.g.

  searchpath ghc MyModule.hs OtherModule -isomeDir --internet localModules.map -iotherdir --internet https://hackage.org/modules.map -iotherDir2 -cache cache

will result in the download of all the modules it can find in each directory and execute:

  ghc MyModule.hs OtherModule -isomeDir -icache/internet1 -iotherdir -icache/internet2 -iotherDir2

Mapfiles have the form:

  #comment
  HAppS http://happs.org
  #comment
  *.*.HaXml.* http://www.cs.york.ac.uk/fp/darcs/HaXml/src
  #another comment

Simon, I am now handling import chasing, Haskell extensions, literate Haskell, LaTeX Haskell, and HTTPS URLs (via curl). It works with the ghc command line, except that it doesn't handle multiple directories passed on the same -i. (Can someone tell me what the official path separator is for ghc -i?) It doesn't yet work with other compilers/interpreters, but should be easy to adapt. It does not handle package files right now.

Request: This will work better as people expose their modules via http(s). If your modules are visible this way, please email me a baseURL.

Location: http://happs.org/SearchPath.hs

-Alex-
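As a sketch of how a mapfile pattern like *.*.HaXml.* might be matched against a module name — this is my guess at the semantics, assuming '*' matches exactly one dot-separated component (the real SearchPath rules may differ, e.g. a trailing '*' spanning several components):

```haskell
-- Split a module id or pattern on dots: "A.B.C" -> ["A","B","C"].
splitDots :: String -> [String]
splitDots s = case break (== '.') s of
  (c, [])       -> [c]
  (c, _ : rest) -> c : splitDots rest

-- Match a pattern against a module name, component by component;
-- '*' stands for exactly one component in this sketch.
matches :: String -> String -> Bool
matches pat name = go (splitDots pat) (splitDots name)
  where
    go []         []       = True
    go ("*" : ps) (_ : ns) = go ps ns
    go (p : ps)   (n : ns) = p == n && go ps ns
    go _          _        = False

main :: IO ()
main = do
  print (matches "*.*.HaXml.*" "Text.XML.HaXml.Pretty")  -- True
  print (matches "HAppS"       "HAppS")                  -- True
  print (matches "*.*.HaXml.*" "Data.List")              -- False
```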
Re: [Haskell] Re: Proposal: Relative Module Imports
On Tue, 3 May 2005, Samuel Bronson wrote:

> Maybe something like
>
>   from Text.HaXML.XML import (Types, Escape, Pretty)
>
> would be nice.

The problem with this one is that you need a way to express all the other stuff in import statements, like "qualified" or "as", the imported list, etc. If you don't like the dots and are willing to deal with having to type the current module hierarchy twice, a more verbose syntax would be:

  Proposal                            Translation
  ---                                 ---
  module Foo.Bar.Baz.Bing where       module Foo.Bar.Baz.Bing where
  from Foo.Bar.Baz import Blip        import Foo.Bar.Baz.Blip as Blip
  from Text.HaXML.XML import Types    import Text.HaXML.XML.Types as Types
    import Escape                     import Text.HaXML.XML.Escape as Escape

Not as tight as the prior syntax I proposed, but more readable and still a large improvement on the status quo. Thoughts?

-Alex-

On Tue, 3 May 2005, Samuel Bronson wrote:

> On 5/3/05, S. Alexander Jacobson <alexjacobson.com> wrote:
>
> > Problem: We need a way to simplify module imports.
> >
> > Problem details:
> > * Hierarchical module names are getting really long (including a functional area, a package name, and a module name).
> > * People typically import multiple modules from areas close to each other in the hierarchical module namespace (especially in the case of intra-package imports).
> > * Long module names are required even for non-exposed modules because a program may contain only one module with a given name (regardless of its visibility).
> >
> > Idea: Allow module relative imports in a manner that does not break any existing code.
>
> This would be really nice, but I'm not sure I like the way you propose to do it. Those bare dots don't look very nice to me, and I really don't like the idea of having to count them... I'm not even sure I like the idea of imports relative to the current module. I almost want "import Text.HaXML.XML.{Types,Escape,Pretty}", but not quite. And that would not be nice for qualified imports, anyway.
>
> Maybe something like
>
>   from Text.HaXML.XML import (Types, Escape, Pretty)
>
> would be nice.
>
> -- Sam
[Haskell] Proposal: Relative Module Imports
Problem: We need a way to simplify module imports.

Problem details:

* Hierarchical module names are getting really long (including a functional area, a package name, and a module name).
* People typically import multiple modules from areas close to each other in the hierarchical module namespace (especially in the case of intra-package imports).
* Long module names are required even for non-exposed modules because a program may contain only one module with a given name (regardless of its visibility).

Idea: Allow module relative imports in a manner that does not break any existing code.

Proposal:

* Use preceding dots to indicate that a module name is relative.
* Use a from keyword to specify a different relative base.

Example:

  Dot relative syntax              Translation
  ---                              ---
  module Text.Space.Foo.M where    module Text.Space.Foo.M where
  import .M2                       import Text.Space.Foo.M2 as M2
  import ..Bar.Baz                 import Text.Space.Bar.Baz as Bar.Baz
  import Data.Set                  import Data.Set
  from ...HaXML.XML
    import .Types                  import Text.HaXML.XML.Types as Types
    import .Escape                 import Text.HaXML.XML.Escape as Escape
    import .Pretty                 import Text.HaXML.XML.Pretty as Pretty

I believe that the proposed syntax is much more concise and readable than the current equivalent.

-Alex-
Re: [Haskell] How to make Haskell more popular
FYI, Perl 6 is being implemented in Haskell (in <4k of code!), so you can do both!

Project: http://pugscode.org/
Interview: http://www.perl.com/pub/a/2005/03/03/pugs_interview.html

-Alex-

On Fri, 1 Apr 2005, Sebastian Sylvan wrote:

> On Apr 1, 2005 12:28 PM, Jon Fairbairn <[EMAIL PROTECTED]> wrote:
>
> > 1) If another language has a feature, add it to Haskell, so ...
>
> Bah! Why don't you just use Perl! :-)
>
> /S
>
> --
> Sebastian Sylvan
> +46(0)736-818655
> UIN: 44640862
Stop untracked dependencies! (was RE: mapping module identifiers to URLs (was RE: [Haskell] URLs in haskell module namespace))
A major problem in every major language I have ever used is module systems that make it all too easy for untracked dependencies to creep silently into code. The result has repeatedly been nasty surprises when moving code between machines. A Haskell example of this problem might look something like this:

* Oh, Network.HTTP is non-standard!? That's weird.
* Uh oh. Google reveals at least three different implementations...
* Now, how do I determine which one is installed here?
* Is there a local copy of the package somewhere?
* No? Ok, is it available for download somewhere?
* Uhm, is the version I am using still available for download?
* Oh, we made local changes too!? What were they?
* Ok, we added Network.Socket.SSL. That's standard, right? [rinse/repeat]

[A lot of detective work later...]

* Ok, now we've figured out all the packages, how do we ship them?

[shipping strategy devised...]

* But what happens if I don't have root/admin on the target machine?

[...]

* Oh, this package conflicts with something already installed?

[etc.]

> It is true that Cabal's Build-Depends doesn't do everything you want it to, but your proposal also duplicates some of the functionality of Build-Depends, and makes it so that there are two places where dependencies are stored. We should only have to specify dependencies once.

I agree you should only have to specify dependencies once. But deferring dependency tracking to the point you are ready to ship (with Cabal or otherwise) is a recipe for disaster. Every code dependency should be explicit and verified at compile time. You should be able to look at any block of code at any time and be able to produce the list of packages required to make it run.
Strawman proposal 2:

Modify the import syntax to allow package identifiers:

  import qualified HaXML HaXML.XML.Parse
  import HAppS HAppS.ACID
  import Personal MyLib

Define a packages file format to resolve package ids to locations, e.g. mapping ids to package URLs such as http://haskell.org/HaXML/HaXML.hkg, http://haskell.org/HUnit2, http://parser.org/parse.hkg, and http://haskell.org/HAppS.hkg, with redirect entries of the form from="http://oldversion" to="http://newversion".

Add a -p option to the compiler pointing to a packages file, and use the content of that file to resolve module ids to implementations. The compiler MUST give an error if the user tries to associate the same module id with two different implementations [no grafting/module relativity!].

Remove the -i compiler option. Any import of a module not in the local path must have a package id. [Get rid of import path order/overlap ambiguity!]

Replace Cabal's Build-Depends field label with a Packages field label that takes a path to a packages file as a value. Add an option to include a copy of all packages used by the current package. Give an error if modules from other packages give conflicting module implementations. Allow the user to resolve conflicts in imported package files by using remap and redirect elements in the packages file.

> I don't think it's necessary to specify dependencies with module granularity. This just increases the number of possible error cases, without adding functionality.

If I use multiple packages that export the same module identifier, I need a way to specify which one I want to use. Haskell's existing packaging model doesn't let me do that easily.

> - Package dependencies in Build-Depends could be specified using URLs.

But then dependencies aren't checked at compile time, and you can't specify which modules come from which packages.

> - Cabal could download, build, and install dependencies without any user intervention.

Download and build? Great! Install? No thank you!
In a fundamental sense, *installation* of third party libraries is exactly what I am opposing here. Library installation is what allows untracked dependencies to creep in. Users of executables definitely want a process for installation of executables in their local environment, because they need a way to adjudicate potential conflicts between them (e.g. file associations). However, what users of libraries want is just a safe and easy way to populate a local cache of module implementations from various sources, and that is what I am asking for here.

-Alex-
mapping module identifiers to URLs (was RE: [Haskell] URLs in haskell module namespace)
My original goal in this thread was to explore a way to map module identifiers to the locations of module implementations that implementations can retrieve and compile with minimal user intervention. We got sidetracked into grafting, and I'd like to return to the original goal.

The big open question is whether that mapping of imported module identifiers to retrieval locations is determined per import (like HTML links), per module (like HTML base tags), per build (think -i), per system (the registry), or centrally/federated like the DNS.

Per system and centrally/federated feel like they involve too much bureaucracy, either through a local sysadmin or an IANA-like entity (currently the libraries mailing list), and they give the programmer too little control over which packages they actually want to use. Per import and per module mapping seem like unjustified maintenance headaches absent module identifier relativity (a feature rejected in the 2003 grafting thread to which Simon referred [1], though Malcolm seems to be backtracking on the issue in this thread). Therefore per build seems like the way to go.

One implementation of per build is to extend -i to take URLs, but that requires the implementation to query every URL on the search path for every import, and that seems inefficient. I actually think -i is harmful in general because it makes it much harder to track dependencies. Here is a strawman proposal for replacing -i.

The compiler/interpreter should accept a "Modules" file that maps third party module identifiers to URLs at which source may be found. Here is a strawman file format:

  #moduleId url(s)
  Foo.Bar.* http://domain.com/package-v1.0.hkg http://domain.com/package-v2.0.hkg
  Foo.Bar http://domain2.com/package2.hkg

The URLs on any line enumerate all packages that have compatible implementations of the module identified by the module identifier on that line. Each imported package may contain at most one Modules file.
The implementation attempts to find module URL implementation agreement among all imported packages. If that fails, foreign Modules files are interpreted as being included at the line where they were imported. Later URLs for the same moduleId override earlier ones. The implementation should give a warning if third party packages give conflicting module locations.

Note: Yes, I know about Cabal's Build-Depends, but it doesn't serve the need described here.

-Alex-

[1] http://www.haskell.org/pipermail/libraries/2003-September/001457.html

On Thu, 24 Mar 2005, Simon Marlow wrote:

> On 23 March 2005 13:11, Malcolm Wallace wrote:
>
> > I think this will be trivially possible once the compilers support multiple versioning of packages. (Ghc may even support it already.):
> >
> >   {-# OPTIONS -package foo-1.0 #-}
> >   module Old (module Foo) where
> >   import Foo
> >
> >   {-# OPTIONS -package foo-2.2 #-}
> >   module New (module Foo) where
> >   import Foo
> >
> >   module Convert where
> >   import qualified Old
> >   import qualified New
> >   convert (Old.Foo x y) = New.Foo y x
>
> We're not going to support this, at least for the foreseeable future. It's a pretty big change: every entity in the program becomes parameterised by the package name as well as the module name, because module names can overlap. This means a change to the language: there might be multiple types called M.T in the program, which are not compatible (they might have different representations). You can't pass a value of type M.T that you got from version 1.0 of the package to a function expecting M.T in version 2.
>
> This issue came up in the thread about grafting from late 2003 on the libraries list (sorry, don't have a link to hand).
>
> Cheers,
> Simon
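A minimal parser for the strawman Modules file format could look like the sketch below. It assumes '#' starts a comment and whitespace separates the moduleId from its URLs; the "later lines override earlier ones" rule is the one stated above:

```haskell
import qualified Data.Map as M

-- Parse "#moduleId url(s)" lines into a map from module pattern
-- to candidate package URLs.
parseModules :: String -> M.Map String [String]
parseModules = foldl insertLine M.empty . lines
  where
    insertLine m l = case words (takeWhile (/= '#') l) of
      (mid : urls@(_ : _)) -> M.insert mid urls m  -- later overrides earlier
      _                    -> m                    -- blank or comment-only line

main :: IO ()
main = print . M.toList . parseModules $ unlines
  [ "#moduleId url(s)"
  , "Foo.Bar.* http://domain.com/package-v1.0.hkg http://domain.com/package-v2.0.hkg"
  , "Foo.Bar http://domain2.com/package2.hkg"
  ]
```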
Re: [Haskell] URLs in haskell module namespace
On Tue, 22 Mar 2005, Malcolm Wallace wrote:

> > Import statements should be allowed to include the URL of a Cabal package. Module namespace in these import statements should be with respect to the package and not the local environment. e.g. these import statements allow us to import two different versions of Network.HTTP:
> >
> >   import http://domain.org/package-1.0.cabal#Network.HTTP as HTTP
> >   import http://hage.org/package-2.1.cabal#Network.HTTP as HTTP2
> >   --note use of HTTP fragment identifier for module name
>
> I cannot see any of the Haskell compilers ever implementing this idea as presented. It would introduce an enormous raft of requirements (networking client, database mapping, caching, etc) that do not belong in a compiler - they belong in separate (preprocessing/packaging) tools. Furthermore, these tools already exist, albeit they are young and have a long maturation process still ahead of them.

Ok, well let's unpack what is actually required here:

1. Change of syntax in import statements. GHC already has lots of new syntax.

2. Module names with package scope. GHC already has -i. I assume the complexity of generating a -i w/r/t a notation provided in the import statement is not that high.

3. Networking client. I think GHC already bundles Cabal, and Cabal already handles being a network client and doing some database mapping. (Lemmih: please correct me if I am mistaken.) Also, it is ridiculous for a modern language implementation NOT to have a network client library.

4. Caching. Caching is new, but it is not that difficult to add to an existing HTTP requester, and the benefits seem well worth this marginal cost.

5. Maturation of packaging tools. I agree that the packaging tools are immature. That is why it makes sense to evaluate this proposal now. No one has a big investment in the current packaging model, and packaging tools optimized for a language that works in the way I propose would look very different from packaging tools organized for the pre-Internet world.
> > Advantages:
> > * Simplified place for user to track/enforce external dependencies.
>
> No, it spreads the dependency problem over lots of import statements, which will quickly become a maintenance headache when the URLs become invalid. Imagine a GUI project that uses, say, the GTK+ libraries in a hundred different import statements. Then the GTK server moves to a different URL. Disaster!

Disaster? I don't think so. That is why purl.org exists. The HTTP 302 status code is your friend. If you don't want to use purl.org, feel free to set up your own redirect server. I imagine various different redirect servers operated by different people with different policies about what counts as a bug fix vs. what counts as a new version, etc.

And btw, it is a failure of Haskell right now that imports don't create dependency. Right now, I would like a sane way to import two different versions of the same module so I can do file conversion. It seems like the only way to accomplish this in Haskell as it stands is to rename one version, and then I'm back in the world of global search and replace on import lines again. It would be MUCH nicer to do this via package URLs instead.

> It would be much better to group the dependencies into a single file per project - so there is just one place where changes need to be made. This possibility already exists - just create a .cabal file for the project.

How do I depend on multiple versions of the same package in a single module? How do I make sure that my .cabal file is up to date with the actual content of my imports? I am proposing to automate this process. You appear to want to keep it manual.

> If you are using non-standard modules, you had better know where they came from and how to get another copy.

This proposal provides a sane way to do that (see below).

> The Hackage project is exactly a database of package/location mappings, which the /author/ of each package can keep up-to-date, not the user. Much more maintainable.
See my comment to Lemmih about the possibility of multiple hackage servers and needing to know locations on each of those servers. If Haskell allowed import of package URLs then a hackage server would just be one of many 302 servers (like Purl.org) and not require special plumbing. Note, different people might have different judgements about what constitutes a bugfix vs a new version. You can't rely on the package author to agree with you! -Alex- __________ S. Alexander Jacobson tel:917-770-6565 http://alexjacobson.com ___ Haskell mailing list Haskell@haskell.org http://www.haskell.org/mailman/listinfo/haskell
Re: [Haskell] URLs in haskell module namespace
On Tue, 22 Mar 2005, Lemmih wrote: The idea with Hackage was to create a central place for people to put software or links to software, so keeping only one server for free (as in beer) packages would be desirable. However, this does in no way limit how Hackage can be used for private code repositories. So I assume that means you also think we need a way to locate packages on various repositories. Would you agree that URLs would be a good way of doing so? And, if there are multiple hackage servers, we are going to need to identify the server from which a particular package originates and the location of that package on that server. This proposal provides an obvious method of doing so. Specifying sources on the cmd line or in /etc/cabal/sources.list sounds more maintainable to me. Except you then need to notice when you are no longer using a particular package and do bookkeeping. You also have no way of saying that one of your modules is using version X of a package while another is using version Y without command line specification. If module names were with respect to packages that would be entirely fine. But right now module names are global and that is a serious problem. But they are! GHC can even handle several versions of the same package. Modules from a package won't be in scope if you hide or ignore it. But suppose you want to use two different versions of the same package in a single module? Perhaps because you need to read a file saved with a show corresponding to an old version into a data structure defined in the new version... -Alex-
Re: [Haskell] URLs in haskell module namespace
Proposal restatement: Import statements should be allowed to include URLs of Cabal packages. Module namespace in these import statements should be with respect to the package and not the local environment. e.g. these import statements allow us to import two different versions of Network.HTTP import http://domain.org/package-1.0.cabal#Network.HTTP as HTTP import http://hage.org/package-2.1.cabal#Network.HTTP as HTTP2 --note use of HTTP fragment identifier for module name Advantages: * Simplified place for user to track/enforce external dependencies. If you are using non-standard modules, you had better know where they came from and how to get another copy. This proposal provides a sane way to do that (see below). * It makes it easy to move code between machines. The implementation takes care of retrieving and building the packages automatically and as necessary. There is no need for a separate retrieve/build/install cycle. * Eliminates the horrible globality of Haskell's module namespace. You can use two modules with the same name and different functionality and you can use two modules that use different versions of the same module. (see below). * Users no longer need to think about package installation scope. Package installation is with respect to the current use. Whether multiple users are able to share the same installation is up to the installation. Users can't infest the machine's local namespace by adding new packages. On Tue, 22 Mar 2005, Lemmih wrote: 1. knowing the source package for each module used in their code even if they didn't install the packages in the first place i.e. import Foo.Bar just worked on my development machine. I'm not sure I completely understand what you're saying but knowing the exact URL for every single module import seems more of a hassle than installing a few packages. You could perhaps even make a shell script containing 'cabal-get install package1 package2 ...'. 
I am assuming that I may want to move my code to another machine and that therefore I need to keep a record *somewhere* of the source package of every module I actually use. If I don't, then moving will be much more difficult. Yes, keeping track of these packages is a hassle, but I don't see how it can be avoided. Once I am keeping track, the *somewhere* that it makes the most sense to me to do so is the point in the code where I am importing the module. That way the implementation can enforce correspondence and if I stop using the module, the package dependency automatically vanishes. Doing this sort of work in a separate script strikes me as a maintenance headache and means that all modules I use have to coexist in a shared namespace, which seems likely to create more headache. 2. knowing the current location of those packages even if they didn't obtain them for installation on the original machine where they used them and aren't on the mailing list for them. I assume you meant something like "The developer doesn't know where to find the packages". The location of the packages is irrelevant to the developer since it's handled by Cabal/Hackage. I don't understand. Are you saying that there will be only one Hackage server ever and it will have all information about all packages everywhere and that the location of this hackage server will be hard-coded into every cabal implementation? If so, I find that vision incredibly unappealing. I believe there should/will be multiple hackage servers carrying different packages under the control of different parties (e.g. a corporation might have one for its own private code). And, if there are multiple hackage servers, we are going to need to identify the server from which a particular package originates and the location of that package on that server. This proposal provides an obvious method of doing so. And a big bonus here is we get a simple solution to the problem of Haskell's global module namespace. 
There was a problem with module name spaces? Wouldn't there only be a problem if two packages used the same module name for different functionality? Yes, and that happens whenever you have different modules using different versions of the same module and it also happens when two different authors both choose to name their libraries Network.HTTP or Text.XML.Parse. If module names were with respect to packages that would be entirely fine. But right now module names are global and that is a serious problem. -Alex-
Re: [Haskell] URLs in haskell module namespace
Lemmih, The current Haskell/Cabal module and packaging system is substantially annoying for the typical non-sysadmin end-user. In particular, if they move their code to another machine they have to do a bunch of different administrivia including: 1. knowing the source package for each module used in their code even if they didn't install the packages in the first place i.e. import Foo.Bar just worked on my development machine. 2. knowing the current location of those packages even if they didn't obtain them for installation on the original machine where they used them and aren't on the mailing list for them. 3. going through the hassle of doing a "cabal-get install" for each of them once they have figured it all out. I'd rather have a system that takes care of 1-3 for me and just reports errors if particular modules are irretrievable. That being said, Cabal definitely solves a lot of problems that my original proposal left unaddressed (e.g. producing executables needed to build modules, handling C code, versioning?). Perhaps the correct answer is to import Cabal packages rather than haskell source e.g. import http://package.org/package-1.0.cabal#Foo.Bar as Baz import http://package.org/package-2.0.cabal#Foo.Bar as Baz2 --note use of HTTP fragment identifier for module name And a big bonus here is we get a simple solution to the problem of Haskell's global module namespace. Now module namespace is local to individual packages. If cabal also has a "cabal-put package MyPackage http://myhost.com/dir" then we have a really simple and beautiful system for sharing libraries over the Internet as well! If the change of import syntax is blessed by the powers that be, would it be hard to adapt Cabal to work like this? -Alex- On Tue, 22 Mar 2005, Lemmih wrote: On Mon, 21 Mar 2005 23:06:25 +0100, Sven Moritz Hallberg <[EMAIL PROTECTED]> wrote: Greetings Alexander, I have been thinking about something very much similar for some time. But: On 21 Mar 2005 at 21:47, S. Alexander Jacobson wrote: As I move from machine to machine, it would be nice not to have to install all the libraries I use over and over again. I'd like to be able to do something like this: import http://module.org/someLib as someLib The extra complexity outstrips the gain since installing a package will soon be as easy as this: 'cabal-get install myPackage'. Check out the Cabal/Hackage project. I'm not sure a URL is the right thing to use. For instance, what about the http part? In the end, the URL gives a certain location for the module, which might change. Programs using the module should not become invalid just by movement of the dependency. If the requested module itself does local imports, the implementation would first try to resolve the names on the client machine and otherwise make requests along remote relative paths. It would be nice if implementations cached these http requests and did If-Modified-Since requests on each compile. If the document at the URL has been modified it might show the diff and ask the user if it is appropriate to upgrade to the new version. Exactly. I think, even, that this kind of handling is what we _need_. I routinely feel, in writing my own modules, the hassle of questions like "how do I package this?". It would be much easier and accessible to just put my modules up one by one on the Web, advertise them (by posting the documentation, preferably ;)) and know that people's GHC or whatnot will just auto-fetch them. This is exactly what Cabal and Hackage are solving. The next thought of course is versioning. To make sure my Haskell system gets the version I meant when I wrote my program, modules need version numbers. 
I'd propose the following. module A [1,5.2] (...) where ... The bracketed expression after the module name is an interval of interface numbers: This version of the module exports interface 5.2, the decimal indicating the second revision since no. 5. The module further declares to be backwards-compatible with all interfaces down to version 1, inclusively (i.e. they form a sequence of subsets). Nota bene this scheme is the same as that used by GNU libtool (although libtool explains it much too complicated). A module author would start with interface 1 (i.e. write [1,1]) and upon changing the module: - If the change was only a code revision with no interface or semantic changes at all, raise the fractional part, e.g. [1,1.1] - If there was any change in the module exports, or the semantics of existing exports, raise the interface number (upper bound) to the next integer, e.g. [1,2] - If the change broke compatibility with the last version (i.e. removed or changed any of the existing exports), snap the lower bound up to reduce the interval to a single element again, e.g. [3,3].
Re: [Haskell] URLs in haskell module namespace
A few quick thoughts: 1. Although technically HTTP URLs are locations rather than identifiers, that is the behavior we want in this context. If you want to trust someone else to serve you the correct module, you should specify it. A formal spec should define exactly what URI schemes are supported. I would like support for HTTP and HTTPS. 2. Versioning is an issue independent of whether Haskell allows HTTP URLs as module locators. However if Haskell does end up with versioning AND HTTP support, it might make sense for it to use WebDAV versioning to access remote modules. 3. I love the concept of adapters. In particular, I'd really like a way to make sure that Prelude.read does not produce an error when the saved representation of a datatype differs from the current one. Manual management is a big PITA. (And yes, this too is orthogonal to the question of URLs in haskell module namespace.) -Alex- On Mon, 21 Mar 2005, Sven Moritz Hallberg wrote: Greetings Alexander, I have been thinking about something very much similar for some time. But: On 21 Mar 2005 at 21:47, S. Alexander Jacobson wrote: As I move from machine to machine, it would be nice not to have to install all the libraries I use over and over again. I'd like to be able to do something like this: import http://module.org/someLib as someLib I'm not sure a URL is the right thing to use. For instance, what about the http part? In the end, the URL gives a certain location for the module, which might change. Programs using the module should not become invalid just by movement of the dependency. If the requested module itself does local imports, the implementation would first try to resolve the names on the client machine and otherwise make requests along remote relative paths. It would be nice if implementations cached these http requests and did If-Modified-Since requests on each compile. 
If the document at the URL has been modified it might show the diff and ask the user if it is appropriate to upgrade to the new version. Exactly. I think, even, that this kind of handling is what we _need_. I routinely feel, in writing my own modules, the hassle of questions like "how do I package this?". It would be much easier and accessible to just put my modules up one by one on the Web, advertise them (by posting the documentation, preferably ;)) and know that people's GHC or whatnot will just auto-fetch them. The next thought of course is versioning. To make sure my Haskell system gets the version I meant when I wrote my program, modules need version numbers. I'd propose the following. module A [1,5.2] (...) where ... The bracketed expression after the module name is an interval of interface numbers: This version of the module exports interface 5.2, the decimal indicating the second revision since no. 5. The module further declares to be backwards-compatible with all interfaces down to version 1, inclusively (i.e. they form a sequence of subsets). Nota bene this scheme is the same as that used by GNU libtool (although libtool explains it much too complicated). A module author would start with interface 1 (i.e. write [1,1]) and upon changing the module: - If the change was only a code revision with no interface or semantic changes at all, raise the fractional part, e.g. [1,1.1] - If there was any change in the module exports, or the semantics of existing exports, raise the interface number (upper bound) to the next integer, e.g. [1,2] - If the change broke compatibility with the last version (i.e. removed or changed any of the existing exports), snap the lower bound up to reduce the interval to a single element again, e.g. [3,3]. import A 2 (...) The import statement includes a single integer interface number which is the number of the interface this module was written against. 
It indicates that any version of module A whose interface interval contains 2 is compatible. Obviously, the Haskell system should be able to provide some convenience for managing the interface numbers. It should also be possible to devise a smart way of handling omitted interface info (both on the ex- and import side). Finally, one will wish for a system of providing adaptor modules to interface old importers to new versions of their importees. That way, interfaces can be evolved rapidly because backwards-compatibility need not be retained, as long as one provides a suitable adaptor (to be auto-installed by an importing system). In such a setting, the simple "latest compatible interval" approach also becomes sufficient to handle even strong interface fluctuation because gaps can always be bridged with adaptors. Does this make sense? Cheers, Sven Moritz
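Sven's interface-interval scheme is easy to model concretely. A minimal Haskell sketch (the `Interval` and `compatible` names are mine, not part of the proposal), treating interface numbers like 5.2 as Rationals:

```haskell
-- A module's declared interface interval: backwards-compatible down to
-- the lower bound, currently exporting the upper bound (e.g. [1,5.2]).
type Interval = (Rational, Rational)

-- An importer written against interface n (e.g. "import A 2") is
-- satisfied by any module version whose interval contains n.
compatible :: Interval -> Rational -> Bool
compatible (lo, hi) n = lo <= n && n <= hi

main :: IO ()
main = do
  print (compatible (1, 5.2) 2) -- [1,5.2] satisfies "import A 2"
  print (compatible (3, 3) 2)   -- after a breaking change to [3,3], it does not
```

The adaptor idea then amounts to supplying a bridge module whenever no installed version's interval contains the requested interface number.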
[Haskell] URLs in haskell module namespace
As I move from machine to machine, it would be nice not to have to install all the libraries I use over and over again. I'd like to be able to do something like this: import http://module.org/someLib as someLib If the requested module itself does local imports, the implementation would first try to resolve the names on the client machine and otherwise make requests along remote relative paths. It would be nice if implementations cached these http requests and did If-Modified-Since requests on each compile. If the document at the URL has been modified it might show the diff and ask the user if it is appropriate to upgrade to the new version. Does this make sense? -Alex-
[Haskell] par and forkIO
Multicore and multi-cpu machines are getting more and more common. e.g. http://www.anandtech.com/systems/showdoc.aspx?i=2354&p=2 and http://www.electronicsweekly.com/articles/article.asp?liArticleID=38887&liArticleTypeID=1&liCategoryID=1&liChannelID=114&liFlavourID=1&sSearch=&nPage=1 Do the current implementations of par and forkIO in e.g. GHC take advantage of multiple CPUs/cores or is special plumbing needed (GPH?)? -Alex-
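For concreteness, here is a minimal forkIO sketch using only base; the task and MVar plumbing is purely illustrative, and whether the forked threads actually occupy multiple CPUs is up to the runtime, which is exactly the question above:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- Fork one lightweight thread per task and collect results via MVars.
-- Whether these threads run on separate cores depends on the RTS; a
-- plain runtime time-slices them onto a single OS thread.
main :: IO ()
main = do
  vars <- mapM spawn [1 .. 4 :: Int]
  results <- mapM takeMVar vars
  print (sum results) -- 1 + 4 + 9 + 16 = 30
  where
    spawn n = do
      v <- newEmptyMVar
      _ <- forkIO (putMVar v (n * n))
      return v
```

(`par`, from Control.Parallel, is different in kind: it only creates sparks, and turning sparks into real parallelism is again the runtime's job.)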
instance Show Data.Map? (was Re: [Haskell] Data.Map missing keysFM_GE, eltsFM_GE?)
Ok, thanks. I also notice that Data.Map derives Show, but not Read, and that the shown format is slightly annoying to parse because it creates this new pseudo-operator (:=). Given that you can't hide instance declarations, it would be better either not to implement Show at all, to implement it as (show.assocs), or to provide a Read instance. Any hope of the above, or do I need to refactor my code not to use (Read,Show) around Data.Map? -Alex- On Mon, 21 Feb 2005, Tomasz Zielonka wrote: On Mon, Feb 21, 2005 at 02:39:54PM -0500, S. Alexander Jacobson wrote: Data.FiniteMap has functions keysFM_GE and eltsFM_GE which return the keys or elts of pairs greater than or equal to the provided key. These functions do not appear to be in the new Data.Map. Is this intentional? There is splitLookup :: (Ord k) => k -> Data.Map.Map k a -> (Data.Map.Map k a, Maybe a, Data.Map.Map k a) and toAscList :: Data.Map.Map k a -> [(k, a)] so you can write keysFM_GE and eltsFM_GE yourself, for example: eltsFM_GE k m = maybe [] return eq ++ Prelude.map snd (toAscList ge) where (_, eq, ge) = splitLookup k m Best regards Tomasz
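Tomasz's splitLookup trick generalizes directly; here is a self-contained sketch on top of Data.Map (the pairsGE/keysGE/eltsGE names are mine, chosen to mirror the old FiniteMap functions):

```haskell
import qualified Data.Map as Map

-- Entries with key >= k in ascending key order, recovering the old
-- Data.FiniteMap keysFM_GE/eltsFM_GE behaviour on top of Data.Map.
pairsGE :: Ord k => k -> Map.Map k a -> [(k, a)]
pairsGE k m = maybe [] (\v -> [(k, v)]) eq ++ Map.toAscList gt
  where (_, eq, gt) = Map.splitLookup k m

keysGE :: Ord k => k -> Map.Map k a -> [k]
keysGE k = map fst . pairsGE k

eltsGE :: Ord k => k -> Map.Map k a -> [a]
eltsGE k = map snd . pairsGE k

main :: IO ()
main = do
  let m = Map.fromList [(1, "a"), (3, "b"), (5, "c")]
  print (keysGE 3 m) -- [3,5]
  print (eltsGE 2 m) -- ["b","c"]
```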
[Haskell] Data.Map missing keysFM_GE, eltsFM_GE?
Data.FiniteMap has functions keysFM_GE and eltsFM_GE which return the keys or elts of pairs greater than or equal to the provided key. These functions do not appear to be in the new Data.Map. Is this intentional? -Alex-
Re: [Haskell] Proposal: Allow "\=" for field update in record update syntax
On Fri, 18 Feb 2005, Keean Schupke wrote: It's a pity Template Haskell cannot define infix operators, but you could use TH like: $update rec field fn which would expand to: rec { field = fn (field rec) } That doesn't help you so much if you want to update more than one field at a time. I think the best case syntax in this spirit is to define: x // f = f x infixl 6 // $(a field val) => \rec -> rec {field = val} $(u field fn) => \rec -> rec {field = fn (field rec)} Which we would then be able to use like this: fun rec = rec // $(u field1 fn) . $(a field2 val) But, I'm not sure that is superior to having the user explicitly derive u_ and a_ functions on the rectype which looks like this: $(deriveUpdate RecType) fun rec = rec // u_field1 fn . a_field2 val Aside: Why doesn't TH allow infix? Haskell reserves operators that start with ':' for constructors, so it would seem natural for TH to use those for infix. Then we could have e.g. (:=) and (:\=) and end up with a syntax that looks like this: rec // field :\= fn . field2 := val And that is BETTER than the Haskell syntax that I originally proposed. Are TH infix operators on the agenda? -Alex- Keean. S. Alexander Jacobson wrote: I don't know what it takes to get this sort of change into circulation, but I assume it starts with a clear and specific description of the problem, exploration of alternatives, and a specific proposed solution. So here goes: -- Proposal: Allow "\=" for field update in record update syntax Specifically, interpret rec {field\=fn} as rec {field = fn (field rec)} Justification: It is extremely common when updating records to be updating a field value based on its prior value rather than simply assigning a new value from nothing. 
Unfortunately the syntax supports the latter clearly but makes the former awkward to read, understand, and debug: myFun db myType1 = db { tMyType1 = insert myType1 $ tMyType1 db, nextId = 1 + (nextId db) } One solution to this problem that does not involve changing syntax is to generate field update functions for every field of a type e.g.: update_tMyType1 f s = s {tMyType1 = f $ tMyType1 s} update_nextId f s = s {nextId = f $ nextId s} Once you have those functions, myFun looks much better: myFun db myType1 = update_nextId (+1) $ update_tMyType1 f db where f = insert myType1 But, generating the update_ functions is not programming; it's just manual syntax generation. And, even if you have Template Haskell to do it, * it is annoying to have to find/write the TH code to implement * update_ is incredibly verbose * shortening it risks proliferation of pseudo-syntax: u_tMyType, u__tMyType, or utMyType? * it adds clutter to have to call the TH for each record type, and * it pollutes the name space/increases risk of name collision Rather than force the user to jump through these hoops, why not make the syntax flow naturally from the existing record update syntax? It makes sense to use \= for this task as it is the functional generalization of C or Java's += or *= . Then myFun would have the elegant and clear syntax: myFun db myType1 = db {tMyType1\=insert myType1, nextId\=(+1)} -Alex-
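For comparison, the manually derived u_/a_ style discussed in this thread works in plain Haskell today (Rec, u_field1, and a_field2 are illustrative names, not from any library):

```haskell
data Rec = Rec { field1 :: Int, field2 :: String } deriving (Show, Eq)

-- Reverse application, so a chain of updates reads left to right.
(//) :: a -> (a -> b) -> b
x // f = f x
infixl 6 //

-- One hand-written update (u_) or assign (a_) function per field.
u_field1 :: (Int -> Int) -> Rec -> Rec
u_field1 f r = r { field1 = f (field1 r) }

a_field2 :: String -> Rec -> Rec
a_field2 v r = r { field2 = v }

main :: IO ()
main = print (Rec 1 "old" // u_field1 (+ 1) // a_field2 "new")
```

The cost, as the message says, is writing (or generating) one such pair of functions per field.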
[Haskell] Proposal: Allow "\=" for field update in record update syntax
I don't know what it takes to get this sort of change into circulation, but I assume it starts with a clear and specific description of the problem, exploration of alternatives, and a specific proposed solution. So here goes: -- Proposal: Allow "\=" for field update in record update syntax Specifically, interpret rec {field\=fn} as rec {field = fn (field rec)} Justification: It is extremely common when updating records to be updating a field value based on its prior value rather than simply assigning a new value from nothing. Unfortunately the syntax supports the latter clearly but makes the former awkward to read, understand, and debug: myFun db myType1 = db { tMyType1 = insert myType1 $ tMyType1 db, nextId = 1 + (nextId db) } One solution to this problem that does not involve changing syntax is to generate field update functions for every field of a type e.g.: update_tMyType1 f s = s {tMyType1 = f $ tMyType1 s} update_nextId f s = s {nextId = f $ nextId s} Once you have those functions, myFun looks much better: myFun db myType1 = update_nextId (+1) $ update_tMyType1 f db where f = insert myType1 But, generating the update_ functions is not programming; it's just manual syntax generation. And, even if you have Template Haskell to do it, * it is annoying to have to find/write the TH code to implement * update_ is incredibly verbose * shortening it risks proliferation of pseudo-syntax: u_tMyType, u__tMyType, or utMyType? * it adds clutter to have to call the TH for each record type, and * it pollutes the name space/increases risk of name collision Rather than force the user to jump through these hoops, why not make the syntax flow naturally from the existing record update syntax? It makes sense to use \= for this task as it is the functional generalization of C or Java's += or *= . Then myFun would have the elegant and clear syntax: myFun db myType1 = db {tMyType1\=insert myType1, nextId\=(+1)} -Alex-
Re: [Haskell] Job Posting (Looking for a few good functional programmers)
Yaron, would you mind sharing the reason your firm chose OCaml over Haskell for your applications? For others, I would love to organize an informal gathering of NYC Haskell programmers if there are any. If you are interested, please contact me and I'll try to make it happen. -Alex- On Thu, 3 Feb 2005, Yaron Minsky wrote: Jane Street Capital (an affiliate of Henry Capital <http://henrycapital.com>) is a proprietary trading company located in Manhattan. The quantitative research department is responsible for analyzing, improving, and generating trading strategies. It's an open and informal environment (you can wear shorts and a t-shirt to the office), and the work is technically challenging, including systems work, machine learning, statistical analysis, parallel processing, and anything that crosses our path that looks useful. One unusual attraction of the job is that the large majority of our programming is done in OCaml. Pay is competitive, and we're a reasonably small company (around 85 employees), so advancement is pretty quick for someone who performs well. Here's what we're looking for: - Top-notch mathematical and analytic skills. We want people who can solve difficult technical problems, and think clearly and mathematically about all sorts of problems. - Strong programming skills. Pretty much all of our programming is in OCaml, so being a solid caml hacker is a big plus. But we're also interested in great programmers who we are convinced will be able to pick up OCaml quickly, so anyone with a high-level of proficiency with functional languages could be a good match. - Strong Unix/Linux skills --- We're looking for someone who knows their way around the standard unix tools, can write makefiles, shell scripts, etc. We use a beowulf cluster for compute-intensive jobs, so experience programming for and administering clusters is a big plus. 
If you're interested (or have any students you think might be a good match) and would be willing to relocate to New York, please send a cover-letter and resume to: [EMAIL PROTECTED]
[Haskell] Re: [Haskell-cafe] field record update syntax
(Moved to Haskell list because this is now a suggestion for the language) I do a lot of this sort of thing. foo {bar = fn $ bar foo ,baz = fn2 $ baz foo } It would be much nicer if this syntax did the equivalent: foo {bar \= fn ,baz \= fn2 } -Alex- On Thu, 27 Jan 2005, Henning Thielemann wrote: On Thu, 27 Jan 2005, S. Alexander Jacobson wrote: I have a lot of code of the form foo {bar = fn $ bar foo} Is there a more concise syntax? I am thinking the record equivalent of C's foo+=5... I imagine there is some operator that does this e.g. foo {bar =* fn} But I don't know what it is... If you have only few different record fields you may like to define an update function for each record field. updateBar fn foo = foo {bar = fn (bar foo)}
Re: [Haskell] pattern matching across instance declarations
My point is that I want to be able to add new toMyType definitions without having to modify some central library file and I want the compiler to warn me when I haven't defined one for each instance. -Alex- On Thu, 2 Dec 2004, Ben Rudiak-Gould wrote: > S. Alexander Jacobson wrote: > > > data MyExistantialType=forall v.(Show v)=>EType v > > > > class MyExistentialTypeable a where > >toMyType::String->MyExistantialType > > > > instance MyExistentialTypeable String where > >toMyType "String" = EType "foo" > > > > instance MyExistentialTypeable Int where > >toMyType "Int" = EType "bar" > > > > tVal (EType v)=show v > > main = print $ tVal $ toMyType "String" > > The type class isn't necessary here, since toMyType has the same type in > every instance anyway (namely String->MyExistentialType). You can simply > write > >data MyExistantialType=forall v.(Show v)=>EType v > >toMyType "String" = EType "foo" > toMyType "Int" = EType "bar" > >tVal (EType v)=show v >main = print $ tVal $ toMyType "String" > > -- Ben
[Haskell] why no strictness annotations in labelled fields?
Is there a good reason one can't do: data Foo = Foo {bar::!String} -Alex-
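As far as I can tell, strictness flags in labelled fields are in fact Haskell 98; the snag with the snippet above is likely lexical: by maximal munch, ::! with no intervening space lexes as a single operator symbol. With a space it is accepted:

```haskell
-- Strict labelled field: note the space between :: and !String, so
-- the lexer does not read "::!" as one operator token.
data Foo = Foo { bar :: !String } deriving Show

main :: IO ()
main = print (Foo { bar = "ok" })
```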
[Haskell] pattern matching across instance declarations
Doing read/show on existential types would be a lot easier if pattern matching worked *across* instance declarations and not only within them. For example, this code produces an "Ambiguous type variable" GHC error, but it would be really helpful if it didn't: data MyExistantialType=forall v.(Show v)=>EType v class MyExistentialTypeable a where toMyType::String->MyExistantialType instance MyExistentialTypeable String where toMyType "String" = EType "foo" instance MyExistentialTypeable Int where toMyType "Int" = EType "bar" tVal (EType v)=show v main = print $ tVal $ toMyType "String" I know Haskell has a general prohibition on defining functions in multiple locations, but instance declarations already violate it. Why not go all the way? My current approach is manually writing lookup functions in addition to per-instance functions and that is both annoying and error-prone. -Alex-
[Haskell] Re: understanding HaXml and escaping
Followup: You have a Dtd2Haskell tool, but DTDs are very inexpressive and the result is string based types. Is there a tool for converting e.g. relaxNG into haskell types? Then we get direct support for e.g. doubles or URLs. I know there is a relaxNG validator written in haskell, but that is not the same thing... -Alex- http://www.thaiopensource.com/relaxng/derivative.html On Thu, 28 Oct 2004, Malcolm Wallace wrote: > "S. Alexander Jacobson" <[EMAIL PROTECTED]> writes: > > > Is there a good entry point into HaXml? > > I've now spent some time trying to understand it > > and feel like I've gotten nowhere. > > It is a large package with many diverse facilities, so I'm not > surprised. I take it you have read the ICFP'99 paper linked to from > the HaXml webpage? To give a fuller answer, it would be helpful to > know more about your specific XML needs. > > > The Haddock documentation enumerates what each > > function does, but I still don't know how to > > produce a valid XML document? > > Where does your document come from? Has it been parsed already, > then manipulated, and you want to spit it out again? Or are you > trying to generate a fresh document from nothing? Or perhaps you > have some existing Haskell data-structure you want to convert to XML > for external representation only? > > > For example, this is obviously the wrong way to > > go: > > > > simp2 = document $ Document (Prolog Nothing [] Nothing []) [] $ > > Elem "root" [("attr",AttValue [Left "v\"al"])] > > [CString False "<<<<<>>&&&"] > > > > Because, it produces the obviously wrong: > > > > <root attr="v"al"><<<<<>>&&&</root> > > Ah. Escaping of special characters within text is a separate > issue. It need only be done once, just before output. 
> See Text.XML.HaXml.Escape - specifically you want something like > > simp2 = document $ Document (Prolog Nothing [] Nothing []) [] $ > xmlEscape stdXmlEscaper $ > Elem "root" [("attr",AttValue [Left "v\"al"])] >[CString False "<<<<<>>&&&"] > > > I assume/hope that the combinators properly > > encode/escape attribute values and CDATA, > > No, at the moment they don't. You can always do it one-shot at the > end, as in the example above, although it would probably be better > from a correctness point of view if the combinators did as you suggest. > > > And once I've done so, is there a way to put PIs > > in via the combinators > > Currently, there are no combinators specifically for generating PIs > (simply because no-one has asked for them before), but it would be > extremely easy to add. For instance: > > mkPI :: String -> String -> CFilter > mkPI pitarget str = \t-> [ CMisc (PI (pitarget,str)) ] > > Regards, > Malcolm > __ S. Alexander Jacobson tel:917-770-6565 http://alexjacobson.com ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
[Haskell] Re: understanding HaXml and escaping
On Thu, 28 Oct 2004, Malcolm Wallace wrote: > It is a large package with many diverse facilities, so I'm not > surprised. I take it you have read the ICFP'99 paper linked to from > the HaXml webpage? To give a fuller answer, it would be helpful to > know more about your specific XML needs. It would be helpful to have an example of using combinators to take a value of type e.g. import Network.URI import System.Time data Foo = Foo {uri::URI, name::String, stamp::ClockTime} and produce e.g. a string such as http://mynamespace"; uri="http://someURL"; name="blah" stamp="Mon, 11 Oct 2004 09:05:31 -0400"/> Note, I think I've figured out how to do this using raw types (after modifying to allow PIs in the prolog!), but I assume using the combinators is more correct. > Ah. Escaping of special characters within text is a separate > issue. It need only be done once, just before output. > See Text.XML.HaXml.Escape - specifically you want something like > > simp2 = document $ Document (Prolog Nothing [] Nothing []) [] $ > xmlEscape stdXmlEscaper $ > Elem "root" [("attr",AttValue [Left "v\"al"])] >[CString False "<<<<<>>&&&"] Yes, thank you. I figured this much out after posting the last mail. It would have been nice if the documents made the use of both document and xmlEscape more clear at the beginning. But, I actually think the document function should *automatically* implement xmlEscape. Otherwise, it is too easy for the user to produce incorrect documents notwithstanding what the library claims to promise! > > I assume/hope that the combinators properly > > encode/escape attribute values and CDATA, > > No, at the moment they don't. You can always do it one-shot at the > end, as in the example above, although it would probably be better > from a correctness point of view if the combinators did as you suggest. Actually I think the one shot at the end is correct. Everything else is manipulation of the data model and therefore shouldn't worry about escaping. 
But, as I said above, the user should not, by default, have to worry about escaping. > > And once I've done so, is there a way to put PIs > > in via the combinators > > Currently, there are no combinators specifically for generating PIs > (simply because no-one has asked for them before), but it would be > extremely easy to add. For instance: > > mkPI :: String -> String -> CFilter > mkPI pitarget str = \t-> [ CMisc (PI (pitarget,str)) ] Will this work to insert a PI before the first element? -Alex- __ S. Alexander Jacobson tel:917-770-6565 http://alexjacobson.com ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
[Haskell] understanding HaXml and escaping
Is there a good entry point into HaXml? I've now spent some time trying to understand it and feel like I've gotten nowhere. The Haddock documentation enumerates what each function does, but I still don't know how to produce a valid XML document? For example, this is obviously the wrong way to go: simp2 = document $ Document (Prolog Nothing [] Nothing []) [] $ Elem "root" [("attr",AttValue [Left "v\"al"])] [CString False "<<<<<>>&&&"] Because, it produces the obviously wrong: <<<<<>>&&& I assume/hope that the combinators properly encode/escape attribute values and CDATA, but can't figure out how to generate even the simple XML above. And once I've done so, is there a way to put PIs in via the combinators or do I have to import Types and risk have unescaped stuff in my document? -Alex- On Thu, 28 Oct 2004, Malcolm Wallace wrote: > "S. Alexander Jacobson" <[EMAIL PROTECTED]> writes: > > > I modified the Prolog type to be > >data Prolog = Prolog (Maybe XMLDecl) [Misc] (Maybe DocTypeDecl) [Misc] > > and then modified the Prolog parser > > Thanks for spotting this bug and providing a fix. I also note that > the XML spec allows "misc*" to follow the document top-level element: > > document ::= prolog element Misc* > > and this too is incorrect in HaXml. There may well be other > occurrences of the same omission. > > > Given that this fix was so very easy and given > > that the parser was already spec consistent, I now > > have to assume that there was good reason for the > > Prolog to be spec inconsistent, but I don't know > > what it is... > > I originally assumed that Misc's were unimportant and could be > discarded, like comments are discarded by a compiler. I failed to > notice that PI's should be passed through to the application. > > > Implementation question: Why is there so much > > replicated code in HaXML/Html (parse.hs and > > pretty.hs) > > The HTML parser does some correction of mal-formed input, which > is not otherwise permitted by the XML spec. 
Likewise, the HTML > pretty-printer makes some wild and unjustified assumptions about the > way that humans like to format their documents, whereas the XML pp > is more strictly-conforming. Once XHTML becomes common, the HTML > parser/pp will be obsolete. > > Regards, > Malcolm > __ S. Alexander Jacobson tel:917-770-6565 http://alexjacobson.com ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
Re: [Haskell] HaXML incorrect interpretation of XML spec!
I modified the Prolog type to be data Prolog = Prolog (Maybe XMLDecl) [Misc] (Maybe DocTypeDecl) [Misc] and then modified the Prolog parser (which actually was correct) to actually use the misc values in constucting the Prolog. I replace the prolog function in pretty.hs with prolog (Prolog x m1 dtd m2) = maybe xmldecl x $$ vcat (map misc m1) $$ maybe doctypedecl dtd $$ vcat (map misc m2) and did a few more similar things in Haskell2XML and XML2Haskell (about a 5 min operation) and it all seems to work properly. Implementation question: Why is there so much replicated code in HaXML/Html (parse.hs and pretty.hs) Given that this fix was so very easy and given that the parser was already spec consistent, I now have to assume that there was good reason for the Prolog to be spec inconsistent, but I don't know what it is... Thoughts? -Alex- On Wed, 27 Oct 2004, S. Alexander Jacobson wrote: > The XML spec defines the prolog(1) as follows: > > [22]prolog ::= XMLDecl? Misc* (doctypedecl Misc*)? > > In other words you can have Misc before AND after > the doctype declaration! > > HaXml defines the prolog(2) as: > > data Prolog = Prolog (Maybe XMLDecl) (Maybe DocTypeDecl) > > HaXml therefore does not allow PIs before the > beginning of the top level element. This is a > problem in practice for people who want to use > e.g. XML-Stylesheets(3) where if one is using XSL > to produce HTML it is necessary to put > the stylesheet declaration BEFORE the root element > (if the root element is not HTML). e.g. > > > > > > Is there some way to puta PI before the > root element in combinators? > > -Alex- > > (1) http://www.w3.org/TR/REC-xml/#sec-prolog-dtd > (2) http://www.cs.york.ac.uk/fp/HaXml/HaXml/Text.XML.HaXml.Types.html#Prolog > (3) http://www.w3.org/TR/xml-stylesheet/ > __ > S. Alexander Jacobson tel:917-770-6565 http://alexjacobson.com > ___ > Haskell mailing list > [EMAIL PROTECTED] > http://www.haskell.org/mailman/listinfo/haskell > __ S. 
Alexander Jacobson tel:917-770-6565 http://alexjacobson.com ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
[Haskell] HaXML incorrect interpretation of XML spec!
The XML spec defines the prolog(1) as follows:

    [22] prolog ::= XMLDecl? Misc* (doctypedecl Misc*)?

In other words, you can have Misc before AND after the doctype declaration! HaXml defines the prolog(2) as:

    data Prolog = Prolog (Maybe XMLDecl) (Maybe DocTypeDecl)

HaXml therefore does not allow PIs before the beginning of the top-level element. This is a problem in practice for people who want to use e.g. XML-Stylesheets(3), where if one is using XSL to produce HTML it is necessary to put the stylesheet declaration BEFORE the root element (if the root element is not HTML). Is there some way to put a PI before the root element in combinators? -Alex- (1) http://www.w3.org/TR/REC-xml/#sec-prolog-dtd (2) http://www.cs.york.ac.uk/fp/HaXml/HaXml/Text.XML.HaXml.Types.html#Prolog (3) http://www.w3.org/TR/xml-stylesheet/ __ S. Alexander Jacobson tel:917-770-6565 http://alexjacobson.com
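The spec-conformant Prolog shape argued for here can be sketched with stand-in types. XMLDecl, Misc, and DocTypeDecl below are simplified stubs for illustration, not the real HaXml definitions:

```haskell
-- Stand-ins for the corresponding HaXml types (simplified).
data XMLDecl     = XMLDecl String            deriving Show
data Misc        = Comment String
                 | PI (String, String)        deriving Show
data DocTypeDecl = DTD String                 deriving Show

-- Matches production [22]: prolog ::= XMLDecl? Misc* (doctypedecl Misc*)?
-- i.e. Misc (including PIs) allowed both before AND after the doctype.
data Prolog = Prolog (Maybe XMLDecl) [Misc] (Maybe DocTypeDecl) [Misc]
  deriving Show

main :: IO ()
main = print (Prolog Nothing
                     [PI ("xml-stylesheet", "type=\"text/xsl\"")]
                     Nothing
                     [])
```

With the two [Misc] fields, a stylesheet PI before the root element has an obvious home in the data model.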
[Haskell] Haskell XML
I am looking at trying to produce XML with Haskell, and it appears there are at least two dominant options: HaXml and WASH. I read some of the Peter Thiemann paper on why he chose a different architecture from HaXml, but I don't feel I am in a position to judge the merits and (given the age of the paper) am not sure those issues weren't addressed in a subsequent version of HaXml. In reading the HaXml documentation, it appears to rely on DrIFT for some of its functionality. But I also understand that DrIFT is being phased out in favor of Template Haskell. Is there a consensus on how to produce XML (from Haskell types) if one is starting a project today? -Alex- __ S. Alexander Jacobson tel:917-770-6565 http://alexjacobson.com
RE: [Haskell] modern language design, stone age tools
Hmm, those options don't work with ghci (where you are more likely to be debugging). Also, is there a way to get the typesystem to tell you which functions may fail i.e. which functions have failMsg as an implicit parameter? -Alex- _____ S. Alexander Jacobson mailto:[EMAIL PROTECTED] tel:917-770-6565 http://alexjacobson.com On Thu, 24 Jun 2004, Simon Peyton-Jones wrote: > Switching on profiling (-prof -auto-all) does essentially just this. The > cost centre stack is just like an implicit parameters, only one that > does not show up in the types. > > Since there really is some extra parameter passing going on, you really > do need to recompile (all) the code. That's a nuisance, but it's a > price that I can't see how to avoid in a call-by-need language. > > Simon > > | -Original Message- > | From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] > On Behalf Of Alastair Reid > | Sent: 23 June 2004 21:44 > | To: S. Alexander Jacobson > | Cc: [EMAIL PROTECTED] > | Subject: Re: [Haskell] modern language design, stone age tools > | > | On Wednesday 23 June 2004 20:38, S. Alexander Jacobson wrote: > | > It would be really nice if you could pass an > | > error message down to every function that might > | > fail. e.g. using implicit parameters*: > | > > | >myFunc 0 x = head x with ?failMsg="myfunc 0 caused the error" > | > | Interesting. Two variations on that: > | > | 1) Getting a 'stack' trace (i.e., a list of ancestors) can be more > |useful than just knowing the immediate ancestor so you might want > |myFunc to also take an implicit parameter and to add that > |parameter to what it says: > | > | myFunc 0 x = head x > | with ?failMsg="myfunc 0 caused the error but myFunc was called > because " > |++ ?failMsg > | > |[I can't remember if with acts as a let or a letrec. 
The intention > is > | to use the ?failMsg passed implicitly to myFunc] > | > | 2) If you don't want to put errors in the type system, you could > instead use > |exceptions something along the lines of: > | > | myFunc 0 x = mapException > | (\ err -> show err ++ "when invoked by myFunc 0") > | (head x) > | > |[Not quite type correct but hopefully clear enough. The idea is to > combine > | the exception value returned with some extra information about the > context > | with the idea that whoever called myFunc might add extra > information. > | Ideally, the Exception type would be recursive so we could build > chains of > | exceptions without having to use Show.] > | > | -- > | Alastair Reid > | ___ > | Haskell mailing list > | [EMAIL PROTECTED] > | http://www.haskell.org/mailman/listinfo/haskell > ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
Re: [Haskell] modern language design, stone age tools
Thank you for the programming practice recommendation, but I would still prefer something like the implicit parameters solution I described. The solution you describe forces you to put code in functions to handle cases that are outside their domain. Usually I just want to know which function is making the call with the pathological input, e.g.

    quadratic a b c = ((-b + root)/2/a, (-b - root)/2/a)
      where root = sqrt (b*b - 4*a*c)

I want to know which function is passing in arguments that make the root imaginary or a==0. I don't want to rewrite this function so it handles these exceptions explicitly. -Alex- _____ S. Alexander Jacobson mailto:[EMAIL PROTECTED] tel:917-770-6565 http://alexjacobson.com On Wed, 23 Jun 2004, MR K P SCHUPKE wrote:
> I like to write programs so functions cannot fail...
> so head should really be:
>
> maybeHead :: [a] -> Maybe a
> maybeHead (a:_) = Just a
> maybeHead _ = Nothing
>
> etc...
>
> Then the calling function can do something like:
>
> case maybeHead x of
>   Just y -> ...
>   Nothing -> fail "failed in..."
>
> This way you can pass the failure back up to somewhere where
> it's meaningful.
>
> In my opinion you should either be encoding failure in the
> return type, or using exceptions... anything else is just
> bad coding.
>
> Finally use guards (that can be conditionally compiled out)
> before functions that might fail:
>
> if null x then fail "sensible error message" else ...
>
> Keean.
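For comparison, Keean's "encode failure in the return type" style applied to the quadratic example would look something like this sketch (a Maybe-returning variant, with the degenerate and imaginary cases handled explicitly):

```haskell
-- Maybe-returning quadratic: failure is visible in the type,
-- at the cost of handling out-of-domain inputs in the function itself.
quadratic :: Double -> Double -> Double -> Maybe (Double, Double)
quadratic a b c
  | a == 0           = Nothing   -- degenerate: not a quadratic
  | discriminant < 0 = Nothing   -- imaginary roots
  | otherwise        = Just ((-b + root) / (2*a), (-b - root) / (2*a))
  where
    discriminant = b*b - 4*a*c
    root         = sqrt discriminant

main :: IO ()
main = do
  print (quadratic 1 (-3) 2)   -- two real roots
  print (quadratic 1 0 1)      -- imaginary roots
  print (quadratic 0 2 1)      -- a == 0
```

The post's complaint is exactly that this pushes domain checks into the function; the sketch just makes the trade-off concrete.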
Re: [Haskell] modern language design, stone age tools
It would be really nice if you could pass an error message down to every function that might fail. e.g. using implicit parameters*: myFunc 0 x = head x with ?failMsg="myfunc 0 caused the error" Head would be defined as e.g. head [] = fail $ "empty list.\n" ++ ?failMsg head (x:xs) = x It would be even nicer if lineNumber was automatically packed in the failMsg so the output of myFunc [] would be Prelude.head: emptyList. [Line ???] MyModule.myFunc: "myfunc 0 caused the error" [Line 50] But, just being able to pass a msg would make debugging LOADS easier (eliminating much of the need for a debugger to supply a stack trace). Since implicit parameters are part of the type system, it would be even cooler to be able to identify all the functions in your code that may fail (i.e. that carry ?failMsg). You can then target your debugging on those functions. Does this make any sense? -Alex- * I've never actually used implicit parameters. I just swiped the syntax I saw skimming: http://www.cse.ogi.edu/~mbs/pub/implicit_parameters/ _________ S. Alexander Jacobson mailto:[EMAIL PROTECTED] tel:917-770-6565 http://alexjacobson.com On Wed, 23 Jun 2004, Fergus Henderson wrote: > On 23-Jun-2004, Hal Daume III <[EMAIL PROTECTED]> wrote: > > On Wed, 23 Jun 2004, Fergus Henderson wrote: > > > > > On 23-Jun-2004, MR K P SCHUPKE <[EMAIL PROTECTED]> wrote: > > > > This may not be the right answer to the question (which is of > > > > course lets write a debugger) - But I have never used a debugger, > > > > and find them more or less the most unfriendly and useless things > > > > > > So how do you debug problems like "Prelude.head: empty list" > > > in large programs? > > > > Wasn't addressed to me, but here's what I do: > > > > write the following function: > > > > head_ x [] = error ("head_: " ++ show x) > > head_ _ l = head l > > > > and then replace each occurance of "head" with "head_ 1" or "head_ 2" > > etc., so I can know where it failed. 
> > Well, there are quite a lot of such occurrences in the code that I'm working > on: > > bash$ find . -name \*.hs -o -name \*.lhs | xargs grep -w head | wc -l > 130 > > Replacing all of those occurrences by hand is going to be very very > tedious and somewhat time-consuming. Doing it with a script would be > better, but that's not a trivial task. > > Even once that is done, there's no guarantee it will actually help to > find the problem. After all, the problem might well be arising from a > call to "head" in one of ghc's standard libraries: > > bash$ find ~/ghc6-6.2/hslibs \*.hs -o -name \*.lhs | xargs grep -w head | wc -l > 104 > > So not only do I have to edit my own code, and the libraries written by > my colleagues, I also need to edit the ghc library code, and figure out > how to build and reinstall the ghc libraries. That could take a long time. > > After all that, hopefully I will finally know which function called > "head" with an empty list. But even then there's still no guarantee > that I've actually found the source of the problem; the real problem > might be in that function's caller, or the caller's caller, etc. So I > might have to go through the whole process again. > > > (it is rather sad that this is the best approach i could come up > > with...basically tries to get around the [lack of/uselessness of/inability > > to use with ghci] stack traces) > > Yes :-( > > -- > Fergus J. Henderson | "I have always known that the pursuit > Galois Connections, Inc.| of excellence is a lethal habit" > Phone: +1 503 626 6616 | -- the last words of T. S. Garp. > ___ > Haskell mailing list > [EMAIL PROTECTED] > http://www.haskell.org/mailman/listinfo/haskell > ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
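GHC's ImplicitParams extension can express something close to the ?failMsg idea from this thread. A minimal sketch, where headMsg and myFunc are illustrative names rather than proposed library functions:

```haskell
{-# LANGUAGE ImplicitParams #-}

-- A head variant whose type advertises that it may fail with a message:
-- the (?failMsg :: String) constraint plays the role the post describes.
headMsg :: (?failMsg :: String) => [a] -> a
headMsg []      = error ("head: empty list. " ++ ?failMsg)
headMsg (x : _) = x

-- The caller supplies the context message via a let-binding.
myFunc :: Int -> [Int] -> Int
myFunc 0 xs = let ?failMsg = "myFunc 0 caused the error" in headMsg xs
myFunc _ _  = 0

main :: IO ()
main = print (myFunc 0 [1, 2, 3])
```

As the post notes, the constraint also answers the "which functions may fail?" question: any function whose type carries ?failMsg is a candidate.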
Re: [Haskell] Re: sizeFM type
On Mon, 26 Apr 2004, Robert Will wrote: > I understand the Int type to be as large as pointers on each kind of > hardware. So one cannot possibly have any data structure whose size > doesn't fit in an Int. If my data structure is created/destroyed lazily, I don't see why not. Trivially, [0..2^128]. Or my data structure may map to a file in some way; see my prior post on the annoyance of hFileSize vs. take. > > > Moreover, it is not clear that the CPU/memory > > > overhead of returning Integer rather than Int for > > > sizeFM is sufficiently high to be worth bothering > > > the programmer about. > > Well, Int is built-in to any hardware, while Integer isn't. Since > Int/Integer is one of the most used data types in almost any program, the > difference would be _very_ big. I'll cite that hoary Hoare quotation here: "Premature optimization is the root of all evil." People who need high performance can use specialized Int functions where truly necessary. Everywhere else it is a waste of programmer time to force the futzing. -Alex- _ S. Alexander Jacobson mailto:[EMAIL PROTECTED] tel:917-770-6565 http://alexjacobson.com
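On the size question, Data.List already ships Integer-capable "generic" variants, so callers worried about Int overflow can opt in without imposing Integer costs on everyone. A small sketch:

```haskell
import Data.List (genericLength)

main :: IO ()
main = do
  -- Standard length is Int-valued and can in principle overflow
  -- on structures larger than maxBound :: Int.
  print (length [1 .. 10 :: Int])
  -- genericLength :: Num i => [a] -> i lets the caller pick Integer.
  print (genericLength [1 .. 10 :: Int] :: Integer)
```

The same pattern (an Int-returning default plus a generic variant) is one way a sizeFM-style API could serve both camps.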
RE: [Haskell] Why is newChan in the IO Monad?
Yes, that makes sense, but I'm ok with passing in an identity. I'd like a function like this: newChanSafe::Identity -> Chan a type Identity = Double -- or whatever -Alex- _____ S. Alexander Jacobson mailto:[EMAIL PROTECTED] tel:917-770-6565 http://alexjacobson.com On Fri, 23 Apr 2004, Nick Benton wrote: > Channels have identity, so allocating a new one is a side effecting > operation. Having it outside the IO monad would require (for example): > > (newChan, newChan) = (let x = newChan in (x,x)) > > which is wrong. If you wrap newChan in unsafePerformIO then the compiler > will feel free to apply rewrites like the above, which is unlikely to be > what you wanted. > > Nick > -Original Message- > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] > On Behalf Of S. Alexander Jacobson > Sent: 23 April 2004 19:22 > To: Haskell Mailing List > Subject: [Haskell] Why is newChan in the IO Monad? > > Nothing actually happens when newChan is called > except construction of a new datastructure. It > would be nice to have non IO monad code be able to > create a new Chan that gets passed to IO code that > uses it somewhere else. > > Alternatively, is there a way to create a Chan > outside the IO monad? > > -Alex- > > _ > S. Alexander Jacobson mailto:[EMAIL PROTECTED] > tel:917-770-6565 http://alexjacobson.com > ___ > Haskell mailing list > [EMAIL PROTECTED] > http://www.haskell.org/mailman/listinfo/haskell > ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
[Haskell] Why is newChan in the IO Monad?
Nothing actually happens when newChan is called except construction of a new datastructure. It would be nice to have non IO monad code be able to create a new Chan that gets passed to IO code that uses it somewhere else. Alternatively, is there a way to create a Chan outside the IO monad? -Alex- _ S. Alexander Jacobson mailto:[EMAIL PROTECTED] tel:917-770-6565 http://alexjacobson.com ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
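The identity issue raised in the replies is easy to see in code: each newChan call mints a channel with its own identity, so two calls must not be shared the way a pure let-binding would be. A minimal sketch:

```haskell
import Control.Concurrent.Chan

main :: IO ()
main = do
  c1 <- newChan
  c2 <- newChan          -- a distinct channel, despite identical code
  writeChan c1 "first"
  writeChan c2 "second"
  readChan c1 >>= putStrLn
  readChan c2 >>= putStrLn
```

If newChan were pure, a compiler would be entitled to rewrite the two calls into one shared channel, which is exactly the hazard the thread describes.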
[Haskell] Announce: Haskell ACID Relational DBMS v0.1
In thinking about a data storage model for a web app I wanted to develop, and finding Haskell so concise and expressive, I wondered if one could write a relational DBMS in Haskell in under 1000 lines of code. The answer appears to be yes!

Features:
* non-destructive-update Haskell DBMS (can use a relational database without escaping to the IO monad!)
* supports user defined types
* supports user defined relations and functions
* command pattern structure for write-ahead logging
* Inner, Outer, Left, Right joins on arbitrary (user-defined) relations (not just "=")
* in-memory/in-process means no disk/marshalling overhead

Risks include:
* functions/aggregates not yet implemented

-Alex- _ S. Alexander Jacobson mailto:[EMAIL PROTECTED] tel:917-770-6565 http://alexjacobson.com

{-# OPTIONS -fglasgow-exts #-}
{--
Haskell ACID Relational Database Management System v.01
Copyright (C) 2004 S. Alexander Jacobson

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
--}
-
{--
Assumptions:
* All data fit in memory (at least in this version!)
* We want an in-process Haskell DBMS
* Atomicity via app level definition of transaction (e.g. prevayler.org)
* Consistency via the app wrapper around database
* Isolation via haskell's referential integrity
* Durability via write-ahead logging of update requests and state serialization
--}
--see test example at the end!!
import Maybe
import Data.Set
import Data.FiniteMap
import Data.Typeable
import List hiding (union,intersect)
import Control.Monad
import Random

--Conceptual model:
--A database is a set of records (tableId is just a property of a record.)
--A record is a mapping from propertyIds to propertyValues with a unique identity.
type RecordSet = Set Record
type Record = FiniteMap PropId PropVal
newtype RecordId = RecordId String deriving (Eq,Ord)
newtype PropId   = PropId String deriving (Eq,Ord,Show)
newtype PropVal  = PropVal {propVal::String} deriving (Eq,Ord,Show)

--A value has meaning only w/r/t the type of its identifier (is "2" < "100"?)
data PropTypeId = PropTypeId {ptName::String,ptArgs::PropTypeArgs} deriving (Eq,Ord)
type PropTypeArgs = [String]

--Abstract representation of database
class DBImpl db => DBInterface db where
  --You need to define propId types before you assign them values in records!
  putPropId :: db -> PropId -> PropTypeId -> db
  getPropId :: db -> PropId -> Maybe PropTypeId
  delPropId :: db -> PropId -> db
  indPropId :: db -> FiniteMap PropId PropTypeId
  --SQLish interpretation of basic interface
  dbInsert :: db -> [Record] -> ([RecordId],db) -- 201 created location
  dbSelect :: db -> SelectExpr -> WhereExpr -> GroupBy -> OrderBy -> [Result]
  dbDelete :: db -> Set SetId -> WhereExpr -> db
  dbUpdate :: db -> UpdateExpr -> SelectExpr -> WhereExpr -> GroupBy -> db
  --support functions (default implementation can use these)
  dbInsertRecord :: db -> Record -> (RecordId,db)
  dbWhereExpr    :: db -> WhereExpr -> Set JoinedRecordIds
  dbGroupBy      :: db -> GroupBy -> Set JoinedRecordIds -> Set (Set JoinedRecordIds)
  dbSelectExpr   :: db -> SelectExpr -> Set (Set JoinedRecordIds) -> ResultSet
  dbOrderByPairs :: db -> OrderBy -> ResultSet -> [(JoinedRecordIds,Result)]
  dbOrderBy      :: db -> OrderBy -> ResultSet -> [Result]
  --
  dbSelect  = defaultDBSelect
  dbInsert  = defaultDBInsert
  dbDelete  = defaultDBDelete
  dbUpdate  = defaultDBUpdate
  dbOrderBy = defaultDBOrderBy

{-- A relational database allows declarative manipulation of sets
of records based on the relation of their property values to specified
constants and on the relation of their property values to those of other
records. --}

type JoinedRecordIds = FiniteMap SetId RecordId --recordIds related in some way
newtype SetId = SetId String deriving (Eq,Ord,Show)
type Result = FiniteMap SetIdPropId (Maybe PropVal) --recordid is property of record
type SetIdPropId = (SetId,PropId)
type ResultSet = Set (JoinedRecordIds,Result)
data SelectExpr
RE: [Haskell] overuse of maybe and [] in prelude and libs
> Actually Haskell doesn't let you redefine *everything*, but GHC does: > read section 7.3.5 of the GHC manual > http://www.haskell.org/ghc/docs/latest/html/users_guide/syntax-extns.html#REBINDABLE-SYNTAX

Mostly. It looks like you still can't redefine Monad to have an Ord or Eq constraint. In particular, "it is essential that the functions GHC finds in scope must have the appropriate types, namely: (>>=) :: forall m a. (...) => m a -> (a -> m b) -> m b". But, either way, I'm assuming I can't do this (making up syntax):

    import Data.FiniteMap using MyPrelude as Prelude

so that everywhere FiniteMap calls Prelude functions, MyPrelude functions are called instead? Then I could simply redefine just and nothing as return and mzero.

Aside: I assume that FiniteMap's lookupFM, like lookup in the Prelude, uses constructor literals (e.g. Just, Nothing) rather than functions that wrap those literals (e.g. just=Just, nothing=Nothing), so in practice this import facility wouldn't help me as much as I would like. But I wonder if exposing constructor literals is actually bad style. Should good libraries hide their constructors and expose only:

* functions that implement construction (e.g. just)
* functions that provide dispatch on each constructor (e.g. foldr, maybe, either, etc.)

Or is this too restrictive? Is the Prelude good style even though it exports Just and Nothing? -Alex- _________ S. Alexander Jacobson mailto:[EMAIL PROTECTED] tel:917-770-6565 http://alexjacobson.com On Thu, 8 Apr 2004, Simon Peyton-Jones wrote: > > | > Haskell is pretty good about letting you install a different Prelude, so you could try it yourself. > | > | Hmm. That's interesting! How does this work? > > It's very simple. Write your own Prelude as a Haskell module MyPrelude. > > Then to use it, say > module Foo where > import Prelude () > import MyPrelude > > (GHC lets you omit the 'import Prelude ()' by saying > -fno-implicit-prelude.)
> > No, there is no auto-magic; you are simply getting a different library, > that is all. And you have to write that library. > > No problem with some modules using MyPrelude and some using Prelude. > (Any more than there's a problem when some modules import module A and > some import module B.) > > > > Simon > > ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
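The constructor-hiding style asked about above can be sketched for Maybe. The names just, nothing, and maybe' are hypothetical smart-constructor/dispatch exports, not Prelude functions:

```haskell
-- In a real library these would be the ONLY exports; the Just/Nothing
-- constructors themselves would stay hidden behind the module boundary.

-- construction
just :: a -> Maybe a
just = Just

nothing :: Maybe a
nothing = Nothing

-- dispatch: one function covering every constructor, like foldr/maybe/either
maybe' :: b -> (a -> b) -> Maybe a -> b
maybe' def f = maybe def f

main :: IO ()
main = print (maybe' 0 (+ 1) (just 41))
```

Client code written this way never pattern-matches on the constructors, so the library is free to change the representation later; that is the usual argument for the style the post floats.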
Cognitive Dimensions and Haskell (was RE: [Haskell] overuse of maybe and [] in prelude and libs)
Also, out of curiosity (since you are at MSFT) are you using any formal Cognitive Dimension stuff in the design of Haskell or its libraries? http://www.gotdotnet.com/team/brada/describingandevaluatingapiusabilityatmicrosoft.ppt Your Excel paper is mentioned in the end-notes. http://research.microsoft.com/Users/simonpj/papers/excel/excel.pdf If so, is there a publicly available CD analysis of Haskell and its libs available? -Alex- _ S. Alexander Jacobson mailto:[EMAIL PROTECTED] tel:917-770-6565 http://alexjacobson.com On Thu, 8 Apr 2004, Simon Peyton-Jones wrote: > Alex > > You've become a very sophisticated Haskell programmer! > > We did at one stage do something like this, by making list comprehensions into monad > comprehensions. So > [ x*x | x <- xs, pred x] > meant the same as > do { x <- xs; if pred x then mzero else return (); return (x*x)} > > But in the next iteration of the language we undid the change, a controversial > decision that some still regret. Because naïve users were getting too many > perplexing error messages about monads and functors when they thought they were just > manipulating lists. > > Haskell is pretty good about letting you install a different Prelude, so you could > try it yourself. > > Simon > > | -Original Message----- > | From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of S. Alexander > | Jacobson > | Sent: 08 April 2004 04:42 > | To: Haskell Mailing List > | Subject: [Haskell] overuse of maybe and [] in prelude and libs > | > | It feels like the Prelude insists on using [] and > | Maybe too much. 
I keep writing things like this: > | > |foo = foldl union emptySet $ maybe mzero return $ lookup pairs key > |goo = maybe emptySet toSomething $ lookup pairs key > | > | which really should look like this: > | > |foo = concat $ lookup pairs key > |goo = fmap toSomething $ lookup pairs key > | > | But, even if we don't have a Monadic/Functor Set, > | foo should at least be: > | > |foo = foldl union emptySet $ lookup key > | > | In other words, shouldn't Prelude define concat > | and lookup as: > | > |concat = foldr mplus mzero -- (Also, see PS) > | > |lookup key [] = mzero > |lookup key ((x,y):xyz) > | | key == x = return y > | | otherwise = lookup key xyz > | > | And if it is a fundamental problem adding > | constraints to instances, why not add all > | automatically derivable classes as constraints to > | all the Prelude classes (esp. Monad and Functor!) > | and automatically derive instances of all > | derivable classes unless the programmer defines > | his/own methods. > | > | -Alex- > | > | PS Shouldn't concat be defined with foldl and not > | foldr? Doesn't foldr imply that you can't concat > | infinite lists? (I know this is a FAQ, but > | where?) > | > | _ > | S. Alexander Jacobson mailto:[EMAIL PROTECTED] > | tel:917-770-6565 http://alexjacobson.com > | ___ > | Haskell mailing list > | [EMAIL PROTECTED] > | http://www.haskell.org/mailman/listinfo/haskell > ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
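The generalized lookup quoted in the proposal above can be written today against MonadPlus; a sketch (lookupM is an illustrative name, to avoid clashing with the Prelude's lookup):

```haskell
import Control.Monad (MonadPlus, mzero)

-- Like Prelude.lookup, but returning into any MonadPlus
-- instead of hard-wiring Maybe, as the post proposes.
lookupM :: (Eq k, MonadPlus m) => k -> [(k, v)] -> m v
lookupM _ [] = mzero
lookupM k ((x, y) : rest)
  | k == x    = return y
  | otherwise = lookupM k rest

main :: IO ()
main = do
  print (lookupM 'a' [('a', 1), ('b', 2)] :: Maybe Int)  -- Maybe result
  print (lookupM 'c' [('a', 1)]           :: [Int])      -- list result
```

The caller's type annotation picks the monad, so the same definition serves both the Maybe-style and list-style uses the thread discusses.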
RE: [Haskell] overuse of maybe and [] in prelude and libs
On Thu, 8 Apr 2004, Simon Peyton-Jones wrote: > You've become a very sophisticated Haskell programmer!

Thank you, but I think it may only seem that way. I'll post a complete program up somewhere and then we'll see where I really stand. Among other things, I don't yet have a feel for idiom/style (e.g. when to define classes rather than modules, when to use default methods, and in what order to define types, classes, and functions for maximum readability).

> We did at one stage do something like this, by making list comprehensions into monad > comprehensions. So > [ x*x | x <- xs, pred x] > meant the same as > do { x <- xs; if pred x then mzero else return (); return (x*x)} > > But in the next iteration of the language we undid the change, a controversial > decision that some still regret. Because naïve users were getting too many > perplexing error messages about monads and functors when they thought they were just > manipulating lists.

I guess I'm in the regret group. Nothing stops beginners from importing BeginnerUtils and using list-typed functions with names like concatList (or implementors from writing better error-message copy). But, ok...

> Haskell is pretty good about letting you install a different Prelude, so you could > try it yourself.

Hmm. That's interesting! How does this work?

* Can this change propagate through the libs so that e.g. lookupFM also returns a Monad rather than Maybe without manual modification of all the libs? (Note: I am actually using lookupFM in my code; I used lookup in my example to simplify.)

* If I modify Monad in my prelude to have Ord, will do-notation work in my new monad class? Will the IO monad work? Can I make () an instance of Ord or Eq?

* Can different modules use different Preludes? It seems like making a module dependent on a different Prelude means potential incompatibilities with third-party modules. e.g.
if I want to use HaXML and it hypothetically changes to define one Prelude and HaskellDB which hypothetically changes to define another Prelude, is there an easy way to import ONLY the functions defined in those modules and not all the functions defined in their respective Preludes? * Is there a formal definition of what in the Prelude is actually core to Haskell and what is really just common utilities located there? Alternatively, does an "advanced prelude" already exist that does what I probably want but don't know yet? -Alex- _ S. Alexander Jacobson mailto:[EMAIL PROTECTED] tel:917-770-6565 http://alexjacobson.com > Simon > > | -Original Message- > | From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of S. Alexander > | Jacobson > | Sent: 08 April 2004 04:42 > | To: Haskell Mailing List > | Subject: [Haskell] overuse of maybe and [] in prelude and libs > | > | It feels like the Prelude insists on using [] and > | Maybe too much. I keep writing things like this: > | > |foo = foldl union emptySet $ maybe mzero return $ lookup pairs key > |goo = maybe emptySet toSomething $ lookup pairs key > | > | which really should look like this: > | > |foo = concat $ lookup pairs key > |goo = fmap toSomething $ lookup pairs key > | > | But, even if we don't have a Monadic/Functor Set, > | foo should at least be: > | > |foo = foldl union emptySet $ lookup key > | > | In other words, shouldn't Prelude define concat > | and lookup as: > | > |concat = foldr mplus mzero -- (Also, see PS) > | > |lookup key [] = mzero > |lookup key ((x,y):xyz) > | | key == x = return y > | | otherwise = lookup key xyz > | > | And if it is a fundamental problem adding > | constraints to instances, why not add all > | automatically derivable classes as constraints to > | all the Prelude classes (esp. Monad and Functor!) > | and automatically derive instances of all > | derivable classes unless the programmer defines > | his/own methods. 
> | > | -Alex- > | > | PS Shouldn't concat be defined with foldl and not > | foldr? Doesn't foldr imply that you can't concat > | infinite lists? (I know this is a FAQ, but > | where?) > | > | _ > | S. Alexander Jacobson mailto:[EMAIL PROTECTED] > | tel:917-770-6565 http://alexjacobson.com > | ___ > | Haskell mailing list > | [EMAIL PROTECTED] > | http://www.haskell.org/mailman/listinfo/haskell > ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
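The monad-comprehension translation discussed above can still be written by hand today with Control.Monad.guard, which packages exactly the `if p then return () else mzero` idiom; a small sketch:

```haskell
import Control.Monad (guard)

-- [ x*x | x <- xs, even x ] in its monadic reading: the filter becomes
-- a guard, i.e. `if even x then return () else mzero`.
squaresOfEvens :: [Int] -> [Int]
squaresOfEvens xs = do
  x <- xs
  guard (even x)
  return (x * x)

main :: IO ()
main = print (squaresOfEvens [1 .. 5])  -- [4,16]
```

Because guard works in any MonadPlus, the same body also filters in Maybe or any other alternative-bearing monad.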
[Haskell] overuse of maybe and [] in prelude and libs
It feels like the Prelude insists on using [] and Maybe too much. I keep writing things like this:

   foo = foldl union emptySet $ maybe mzero return $ lookup pairs key
   goo = maybe emptySet toSomething $ lookup pairs key

which really should look like this:

   foo = concat $ lookup pairs key
   goo = fmap toSomething $ lookup pairs key

But, even if we don't have a Monadic/Functor Set, foo should at least be:

   foo = foldl union emptySet $ lookup pairs key

In other words, shouldn't Prelude define concat and lookup as:

   concat = foldr mplus mzero -- (Also, see PS)

   lookup key [] = mzero
   lookup key ((x,y):xyz)
     | key == x = return y
     | otherwise = lookup key xyz

And if it is a fundamental problem adding constraints to instances, why not add all automatically derivable classes as constraints to all the Prelude classes (esp. Monad and Functor!) and automatically derive instances of all derivable classes unless the programmer defines his/her own methods. -Alex- PS Shouldn't concat be defined with foldl and not foldr? Doesn't foldr imply that you can't concat infinite lists? (I know this is a FAQ, but where?) _________ S. Alexander Jacobson mailto:[EMAIL PROTECTED] tel:917-770-6565 http://alexjacobson.com ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
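For what it's worth, the generalized lookup proposed above does compile as written once the Prelude version is hidden; a sketch of it in action, with the caller choosing the MonadPlus result type:

```haskell
import Prelude hiding (lookup)
import Control.Monad (MonadPlus, mzero)

-- The post's proposed lookup: failure is mzero, success is return,
-- so the result can land in Maybe, [], or any other MonadPlus.
lookup :: (Eq k, MonadPlus m) => k -> [(k, v)] -> m v
lookup _   []          = mzero
lookup key ((x,y):xyz)
  | key == x  = return y
  | otherwise = lookup key xyz

main :: IO ()
main = do
  print (lookup 'a' [('a',1),('b',2)] :: Maybe Int)  -- Just 1
  print (lookup 'c' [('a',1),('b',2)] :: [Int])      -- []
```
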
Re: [Haskell] insufficiently [EMAIL PROTECTED] -- more counterintuitive stuff
Thanks for the ~ syntax, but my question is really why you need it? What benefit do you get from "refutable patterns"? Alternatively, would anything break if a future Haskell just treated all patterns as irrefutable? -Alex- _____ S. Alexander Jacobson mailto:[EMAIL PROTECTED] tel:917-770-6565 http://alexjacobson.com On Tue, 30 Mar 2004, Martin Sjögren wrote: > tis 2004-03-30 klockan 17.30 skrev S. Alexander Jacobson: > > I would assume that this function: > > > > foo list@(h:t) = list > > > > is equivalent to > > > > foo list = list > > where (h:t)=list > > > > But passing [] to the first generates an error > > even though h and t are never used! Passing [] to > > the second works just fine. > > You can write this as > > > foo' list@(~(h:t)) = list > > foo' [] will evaluate to []. The H98 report calls it an "irrefutable > pattern", IIRC. > > > Regards, > Martin > ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
[Haskell] insufficiently [EMAIL PROTECTED] -- more counterintuitive stuff
I would assume that this function:

   foo list@(h:t) = list

is equivalent to

   foo list = list
     where (h:t) = list

But passing [] to the first generates an error even though h and t are never used! Passing [] to the second works just fine. At this point, I sort of understand the reason for MR and not having partially applied type synonyms, but this seems entirely like an issue of syntactic sugar. FYI, I encountered this issue attempting to write code that merges the content of two lists of tuples, e.g.

   merge left@((xL,yL):restL) right@((xR,yR):restR)
     | left==[] =

I imagine this issue is fairly common so perhaps it's another idiom issue. -Alex- PS This code is part of a joinSets function for Data.Set _ S. Alexander Jacobson mailto:[EMAIL PROTECTED] tel:917-770-6565 http://alexjacobson.com ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
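The two behaviors can be seen side by side: where-bindings are lazy by default, while a constructor pattern in an argument is matched eagerly unless marked irrefutable with ~ (a sketch):

```haskell
-- Eager match: fooStrict [] is a runtime pattern-match failure,
-- even though h and t are never used.
fooStrict :: [a] -> [a]
fooStrict list@(h:t) = list

-- Irrefutable match: the (h:t) destructuring is deferred until
-- h or t is actually demanded, so [] is accepted.
fooLazy :: [a] -> [a]
fooLazy list@(~(h:t)) = list

-- where-bindings are lazy, so this also accepts [].
fooWhere :: [a] -> [a]
fooWhere list = list
  where (h:t) = list

main :: IO ()
main = do
  print (fooLazy  ([] :: [Int]))  -- []
  print (fooWhere ([] :: [Int])) -- []
```
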
[Haskell] Re: performance tuning Data.FiniteMap
On Fri, 27 Feb 2004 [EMAIL PROTECTED] wrote: > If indeed the read performance is at premium and updates are > infrequent, why bother with ternary etc. trees -- why not use just a > single, one-level array. Given a reasonable hash function Because updates are not so infrequent that I want to pay the cost of replicating the entire array every update (or every ten!). I'm willing to exchange *some* read time for faster update. Also, because small array copies may be sufficiently faster than tree traversals that I may pay very little extra for faster reads. FYI, my current code looks like this:

   type HTArray base elt = Array base (HT base elt)
   data HT base elt = HT (Maybe (HTArray base elt)) (Maybe elt)
   data MyMap base key elt = ArrMap (HTArray base elt) (key->[base]) (HT base elt)

   newMap minBase maxBase toBase = ArrMap proto toBase emptyHT
     where proto = array (minBase,maxBase) [(x,emptyHT) | x <- [minBase..maxBase]]
           emptyHT = HT Nothing Nothing

   lookup (ArrMap _ toBase ht) key = lookup' ht $ toBase key
   lookup' (HT x y) [] = y
   lookup' (HT Nothing _) _ = Nothing
   lookup' (HT (Just ar) _) (k:ey) = lookup' (ar!k) ey

   insert (ArrMap proto toBase ht) key elt = ArrMap proto toBase newHT
     where newHT = insert' proto ht (toBase key) elt
   insert' _ (HT x _) [] = HT x
   insert' proto (HT Nothing y) key = insert' proto (HT (Just proto) y) key
   insert' p (HT (Just ar) y) (k:ey) = \val -> HT (Just $ newArray val) y
     where newArray val = ar//[(k,insert' p (ar!k) ey val)]

   -----
   testMap = newMap (chr 0) (chr 255) id
   main = do print $ lookup (insert testMap "abc" (Just "def")) "abc"

Make the difference between minBase and maxBase larger in the call to newMap to prefer reads more. Note: This format seems awkward. I feel like I want to have the user define an enumeration type e.g. 
   data UpToFive = One | Two | Three | Four | Five
   instance Ix UpToFive where ...

and have

   newMap :: (Bounded base, Ix base) => (key->[base]) -> MyMap base key elt

But I can't figure out a nice way to auto-generate arbitrary-size enumerations and manually doing so is too wearisome to contemplate. If you can generate these enumeration classes, then it would seem you could auto-derive functions that translate from an arbitrary key into [base]. -Alex- _ S. Alexander Jacobson mailto:[EMAIL PROTECTED] tel:917-770-6565 http://alexjacobson.com ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
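As it happens, Ix is one of the classes GHC can derive for enumeration types, so the instance dreaded above reduces to a deriving clause (a sketch):

```haskell
import Data.Ix (Ix, range)

-- Eq, Ord, Enum, Bounded, and Ix are all derivable for a plain
-- enumeration, so no instance bodies need to be written by hand.
data UpToFive = One | Two | Three | Four | Five
  deriving (Eq, Ord, Show, Enum, Bounded, Ix)

main :: IO ()
main = print (range (minBound :: UpToFive, maxBound))
-- [One,Two,Three,Four,Five]
```

This doesn't solve auto-generating enumerations of arbitrary size, but it removes the per-type boilerplate.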
RE: [Haskell] performance tuning Data.FiniteMap
Is fixing GHC arrays a big research job or is it something that someone can straightforwardly handle if my site actually gets enough traffic to warrant it? -Alex- On Thu, 26 Feb 2004, Simon Peyton-Jones wrote: > | But in managing this tradeoff, what is faster: > | * constructing/destructing e.g. 16 trees (for a 65000 item table) > | * 2 memcpy of 256 item arrays (perhaps after your primop?) > | > | If the latter is not dramatically slower then I > | will bias towards more arrayness. > > I doubt the latter is dramatically slower, but you'd have to experiment > to find out. And GHC is not doing as well as it should on arrays just > now. (One of the things on our to-do list.) Might vary between > implementations too. > > Simon > _________ S. Alexander Jacobson mailto:[EMAIL PROTECTED] tel:917-770-6565 http://alexjacobson.com ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
RE: [Haskell] performance tuning Data.FiniteMap
I know that array copy is much more expensive than a single update. But when you say: On Wed, 25 Feb 2004, Simon Peyton-Jones wrote: > Haskell's current story is to use O(log n) structures such as trees -- Yes, I got that. The question is how I trade off between reads and writes. If I write very infrequently I may be willing to pay for varying levels of more arrayness (lower read time, higher update time). But in managing this tradeoff, what is faster: * constructing/destructing e.g. 16 trees (for a 65000 item table) * 2 memcpy of 256 item arrays (perhaps after your primop?) If the latter is not dramatically slower, then I will bias towards more arrayness. In this context, I believe that MMX instructions in modern CPUs dramatically accelerate in-cache memcpy. If that is the case, then e.g. 256 element arrays could be optimal. > You should feel no need to apologise. Our community *needs* people like > you, who are using Haskell for real applications. That's how we'll > learn "where the shoe pinches". Ask away. Thank you. -Alex- _________ S. Alexander Jacobson mailto:[EMAIL PROTECTED] tel:917-770-6565 http://alexjacobson.com ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
Re: [Haskell] performance tuning Data.FiniteMap
On Tue, 24 Feb 2004, Hal Daume III wrote: > > It seems like updates could be very fast because I > > assume // is implemented with a fast memcpy > (//) is very slow Is that inherent in Haskell (or laziness) or is it just an artifact of the current GHC implementation? Would the problem be solved by making my arrays strict or by using Unboxed arrays? Is there a faster array implementation around? -Alex- PS I'm sorry if these are obvious beginner questions. I would really, really like to use Haskell for a production web application and am trying to work through the various issues. It is hard to find information on these sorts of things and the absence of field testing means you just have to ask these questions in advance. _________ S. Alexander Jacobson mailto:[EMAIL PROTECTED] tel:917-770-6565 http://alexjacobson.com ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
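On the strict/unboxed question: the usual idiom (then and now) is to build the array with destructive writes inside ST and freeze it once at the end, so no (//) copies ever happen; a sketch using the current array-package module names:

```haskell
import Data.Array.ST (runSTUArray, newArray, writeArray)
import Data.Array.Unboxed (UArray, (!))

-- Build in place with destructive writes, then freeze to a pure
-- unboxed array: one allocation, no per-update copying.
buildSquares :: Int -> UArray Int Int
buildSquares n = runSTUArray $ do
  a <- newArray (0, n - 1) 0
  mapM_ (\i -> writeArray a i (i * i)) [0 .. n - 1]
  return a

main :: IO ()
main = print (buildSquares 5 ! 3)  -- 9
```

runSTUArray freezes without a final copy, so the result is pure yet was never rebuilt element-by-element with (//).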
Use Radix for FiniteMap? (was Re: [Haskell] performance tuning Data.FiniteMap)
[Rewrote prior code to be cleaner] Isn't the following more efficient than Data.FiniteMap?

   class Ix a => Radix a where maxRange::(a,a)
   class Radix a => HashKey b a where hashKey::b->[a]

   instance Radix Char where maxRange=(chr 0,chr 255)
   instance Radix a => HashKey [a] a where hashKey x=x

   data HT radix elt = HT (Maybe (Array radix (HT radix elt))) (Maybe elt)
   emptyHT = HT Nothing Nothing
   emptyArray = Just (array maxRange [(x,emptyHT) | x <- [(fst maxRange)..(snd maxRange)]])

   hLookup table key = hLookup' table (hashKey key)
   hLookup' (HT x y) [] = y
   hLookup' (HT Nothing _) _ = Nothing
   hLookup' (HT (Just ar) _) (k:ey) = hLookup' (ar!k) ey

   --insert table key val = insert' table (hashKey key) val
   insert' (HT x _) [] val = HT x val
   insert' (HT Nothing y) key val = insert' (HT emptyArray y) key val
   insert' (HT (Just ar) y) (k:ey) val = HT (Just $ ar//[(k,insert' (ar!k) ey val)]) y

Isn't hLookup substantially faster than the binarySearch in FiniteMap for e.g. Strings? Doesn't insert compete with FiniteMap because small array copies should be blisteringly fast? Also, basic Haskell questions: * How do I get insert to typecheck? insert' works fine. * How do I hide the "lookup" automatically imported from GHC.List? -Alex- _ S. Alexander Jacobson mailto:[EMAIL PROTECTED] tel:917-770-6565 http://alexjacobson.com > On Tue, 24 Feb 2004, JP Bernardy wrote: > > > > > > I believe FiniteMap works by representing the data > > > in binary trees. It is therefore O(log2(n)) to > > > read and update. > > > > > > However, if my application reads many more times > > > than it writes, then perhaps I can get a > > > substantial performance boost by increasing the > > > branch factor on the tree. For example, if each > > > node was an array of 256 elements, reads would be > > > O(log256(n)), a 128x improvement! > > > > Not quite. > > > > In fact, O(log256(n)) is equivalent to O(log2(n)), > > because there is only a constant factor between the > > two. 
That's why bases of logarithms are usually > > omitted in O() expressions. > > > > Besides, the ratio between log256(n) and log2(n) is > > more like 8 than 128. (And you'd lose this factor > > in searching the right subtree, as Ketil pointed out) > > > > Tuning Data.FiniteMap probably is not what you want. > > > > I don't know, but you can have a look at > > Data.Hashtable. > > > > Just my 2 cents, > > JP. > > > > > > > > __ > > Do you Yahoo!? > > Yahoo! Mail SpamGuard - Read only the mail you want. > > http://antispam.yahoo.com/tools > > ___ > > Haskell mailing list > > [EMAIL PROTECTED] > > http://www.haskell.org/mailman/listinfo/haskell > > > > ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
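On the typecheck question asked above: in `insert table key val = insert' table (hashKey key) val`, nothing connects the digit type `a` of `HashKey b a` to anything concrete, so `a` is ambiguous. A functional dependency (a GHC extension) pins it down; a sketch with the instances reduced to String keys:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

-- `b -> a` declares that the key type determines the digit type,
-- which resolves the ambiguity at every call site of hashKey.
class HashKey b a | b -> a where
  hashKey :: b -> [a]

instance HashKey [Char] Char where
  hashKey = id

main :: IO ()
main = print (length (hashKey "abc"))  -- 3
```

With the dependency in place, a top-level `insert` that calls hashKey typechecks without an annotation, because `a` is no longer free.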
Re: [Haskell] performance tuning Data.FiniteMap
Ok. I just looked more carefully at FiniteMap and the Data.HashTable documentation and coded what I had incorrectly imagined would be there. Isn't the following more efficient than FiniteMap without requiring the IO Monad?

   class MaxRange a where maxRange::(a,a)

   data HashTable key elt = HashTable (Maybe (Array key (HashTable key elt))) (Maybe elt)
   emptyHT = HashTable Nothing Nothing

   hLookup (HashTable x y) [] = y
   hLookup (HashTable Nothing _) _ = Nothing
   hLookup (HashTable (Just ar) _) (k:ey) = hLookup (ar!k) ey

   insert (HashTable x _) [] val = HashTable x val
   insert (HashTable Nothing y) (k:ey) val = HashTable (Just initArray) y
     where initArray = array maxRange [(x,if x/=k then emptyHT else insert emptyHT ey val) | x<-[(fst maxRange)..(snd maxRange)]]
   insert (HashTable (Just ar) y) (k:ey) val = HashTable (Just $ ar//[(k,insert (ar!k) ey val)]) y

   --support String keys
   instance MaxRange Char where maxRange=(chr 0,chr 255)

It seems like the depth of the tree and therefore the speed of lookups is dependent on the size of maxRange and faster than the repetitive lookups in FiniteMap. I don't know how lookups compare to Data.HashTable. It seems like updates could be very fast because I assume // is implemented with a fast memcpy (though not as fast as the destructive updates in Data.HashTable) Note: I don't know how to avoid the namespace conflict with GHC.List.lookup so it's hLookup. -Alex- _____ S. Alexander Jacobson mailto:[EMAIL PROTECTED] tel:917-770-6565 http://alexjacobson.com On Tue, 24 Feb 2004, JP Bernardy wrote: > > > I believe FiniteMap works by representing the data > > in binary trees. It is therefore O(log2(n)) to > > read and update. > > > > However, if my application reads many more times > > than it writes, then perhaps I can get a > > substantial performance boost by increasing the > > branch factor on the tree. For example, if each > > node was an array of 256 elements, reads would be > > O(log256(n)), a 128x improvement! > > Not quite. 
> > In fact, O(log256(n)) is equivalent to O(log2(n)), > because there is only a constant factor between the > two. That's why bases of logarithms are usually > omitted in O() expressions. > > Besides, the ratio between log256(n) and log2(n) is > more like 8 than 128. (And you'd lose this factor > in searching the right subtree, as Ketil pointed out) > > Tuning Data.FiniteMap probably is not what you want. > > I don't know, but you can have a look at > Data.Hashtable. > > Just my 2 cents, > JP. > > > > __ > Do you Yahoo!? > Yahoo! Mail SpamGuard - Read only the mail you want. > http://antispam.yahoo.com/tools > ___ > Haskell mailing list > [EMAIL PROTECTED] > http://www.haskell.org/mailman/listinfo/haskell > ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
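On the namespace question raised in both versions of this code: the implicit Prelude import can be suppressed selectively, so the map's own lookup could keep the natural name; a minimal sketch:

```haskell
import Prelude hiding (lookup)
import qualified Data.List as L

-- Our lookup shadows nothing: the Prelude's is simply not in scope.
-- (The body delegates to Data.List.lookup purely for illustration.)
lookup :: Eq k => k -> [(k, v)] -> Maybe v
lookup = L.lookup

main :: IO ()
main = print (lookup 'b' [('a',1),('b',2)])  -- Just 2
```
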
[Haskell] performance tuning Data.FiniteMap
I believe FiniteMap works by representing the data in binary trees. It is therefore O(log2(n)) to read and update. However, if my application reads many more times than it writes, then perhaps I can get a substantial performance boost by increasing the branch factor on the tree. For example, if each node was an array of 256 elements, reads would be O(log256(n)), a 128x improvement! Note: I don't know what sort of penalty writes would have. In theory they would be 128x as expensive as well because each update would require copying 256 branches. However, in practice, I would bet that memcpy of 1k blocks can be optimized at the CPU so much that the difference might not be meaningful. Questions: Am I interpreting the performance issues correctly? Does FiniteMap use Arrays, Lists, or Algebraic Types? If arrays, is there an easy way to change the branching in FiniteMap? Do Haskell arrays do fast memcpy for small arrays? -Alex- _____ S. Alexander Jacobson mailto:[EMAIL PROTECTED] tel:917-770-6565 http://alexjacobson.com ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
Re: [Haskell] updating graphs non-destructively
Thank you for the link to FGL. I also looked at the boilerplate stuff. It *feels* like there should be a way to embed the graph stuff in the boilerplate stuff to allow non-destructive update of arbitrary object graphs without handcoding the mapping of the datastructure to an object graph. Is this possible? Has anyone done this? Any reason to believe it's a bad idea? Alternatively, I could decide up front to represent my data as RDF triples (amenable to a graph system), but I'd rather take advantage of Haskell's type system. -Alex- _____ S. Alexander Jacobson mailto:[EMAIL PROTECTED] tel:917-770-6565 http://alexjacobson.com On Mon, 16 Feb 2004, andrew cooke wrote: > > have you looked at the functional graph library? i can't remember the > details, but think it is efficient and it is certainly a better way to > think about graphs :o) - the details are in the paper by erwig > > http://web.engr.oregonstate.edu/~erwig/fgl/ > > andrew > > S. Alexander Jacobson said: > > In imperative languages, updating an object in a > > graph is an O(1) operation. However, > > non-destructive update appears to be O(n) with the > > size of the graph. For example, suppose we were > > to implement an auction system like eBay: > > > > --Data structures > > data Bid = Bid BidId Auction User Price DateTime > > data Auction = Auction Seller Title Description [Bid] > > data User = User UserId Name [Auction] [Bid] > > > > --Top level database > > type Auctions = FiniteMap AuctionId Auction > > type Users = FiniteMap UserId User > > type Bids = FiniteMap BidId Bid > > type Database = (Auctions,Users,Bids) > > > > If I want to add a bid, it seems like I have > > to traverse the whole Database looking for objects > > that point to the bid. > > > > One alternative is to store pointers rather than > > values e.g. 
> > > > data Bid = Bid BidId AuctionId UserId Price DateTime > > data Auction = Auction SellerId Title Description [BidId] > > data User = User UserId Name [AuctionId] [BidId] > > > > But that makes graph traversal expensive as each > > edge traversal then costs O(log n). O(log n) may > > be okay for this app, but what if I was > > implementing Friendster/LinkedIn/Tribe/etc.? > > > > Is there a better way to think about this? > > > > -Alex- > > > > _ > > S. Alexander Jacobson mailto:[EMAIL PROTECTED] > > tel:917-770-6565 http://alexjacobson.com > > ___ > > Haskell mailing list > > [EMAIL PROTECTED] > > http://www.haskell.org/mailman/listinfo/haskell > > > > > > > -- > personal web site: http://www.acooke.org/andrew > personal mail list: http://www.acooke.org/andrew/compute.html > ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
[Haskell] updating graphs non-destructively
In imperative languages, updating an object in a graph is an O(1) operation. However, non-destructive update appears to be O(n) with the size of the graph. For example, suppose we were to implement an auction system like eBay:

   --Data structures
   data Bid = Bid BidId Auction User Price DateTime
   data Auction = Auction Seller Title Description [Bid]
   data User = User UserId Name [Auction] [Bid]

   --Top level database
   type Auctions = FiniteMap AuctionId Auction
   type Users = FiniteMap UserId User
   type Bids = FiniteMap BidId Bid
   type Database = (Auctions,Users,Bids)

If I want to add a bid, it seems like I have to traverse the whole Database looking for objects that point to the bid. One alternative is to store pointers rather than values e.g.

   data Bid = Bid BidId AuctionId UserId Price DateTime
   data Auction = Auction SellerId Title Description [BidId]
   data User = User UserId Name [AuctionId] [BidId]

But that makes graph traversal expensive as each edge traversal then costs O(log n). O(log n) may be okay for this app, but what if I was implementing Friendster/LinkedIn/Tribe/etc.? Is there a better way to think about this? -Alex- _ S. Alexander Jacobson mailto:[EMAIL PROTECTED] tel:917-770-6565 http://alexjacobson.com ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
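With the ID-indirection alternative, adding a bid touches only the maps along the path, not the whole database; a sketch using Data.Map (the modern successor of FiniteMap), with the record fields trimmed to what the insert needs:

```haskell
import qualified Data.Map as M

type BidId = Int
type AuctionId = Int
type UserId = Int

data Bid = Bid BidId AuctionId UserId deriving Show

-- Top-level database, trimmed to the parts addBid updates.
type Database = ( M.Map AuctionId [BidId]   -- bids per auction
                , M.Map UserId    [BidId]   -- bids per user
                , M.Map BidId     Bid )

-- Three O(log n) map updates; nothing else in the graph is rebuilt.
addBid :: Bid -> Database -> Database
addBid b@(Bid bid aid uid) (as, us, bs) =
  ( M.insertWith (++) aid [bid] as
  , M.insertWith (++) uid [bid] us
  , M.insert bid b bs )

main :: IO ()
main = print (addBid (Bid 1 10 100) (M.empty, M.empty, M.empty))
```

This illustrates the cost model being asked about: the price of indirection is O(log n) per edge followed, in exchange for updates that no longer traverse the object graph.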
RE: [Haskell] GHC 64bit?
Dell's poweredge server with the max 12gb RAM costs $15k. Moore's law says that 1. you only need an extra bit/year. 2. processing that extra bit will cost 50% less next year Though perhaps processing time is not linear with the number of bits for historical/architectural reasons? I presume that, if we are conservative in the number of bits we use then we don't run into the ptr driven storage problem that Axel described. In particular, if we need only 2 or 4 times as much memory, we should only grow pointers by 1 or 2 bits...rather than growing immediately all the way to 64... Perhaps there could be a command line option to specify desired ptr size. I assume it is impossible to adjust pointer size dynamically at runtime depending on memory consumption (I assume you would do it at garbage collection time if it were possible). -Alex- _________ S. Alexander Jacobson mailto:[EMAIL PROTECTED] tel:917-770-6565 http://alexjacobson.com On Fri, 13 Feb 2004, Simon Marlow wrote: > > > alex: > > > Is there a maximum memory GHC can use/reach? > > > Specifically, can GHC address more than 4gb of > > > memory? > > > > SimonM may want to comment, but at the moment I think GHC is > > limited to > > 4G, but only due to lack of 64bit machines/demand on the developers. > > > > If you look in ghc/rts/MBlock.h you'll see some references to this. > > > > I *think* it would be easy to overcome this limit. > > > > (this should really be on glasgow-haskell-users) > > In principle, there's no problem. In practice, GHC's storage manager > needs a (fast) function of type > >Ptr a -> Bool > > to indicate whether a particular address is part of the dynamic heap or > not. This is currently implemented as a bytemap, which is 4k long on a > 32-bit machine. To cover the whole of a 64-bit address space, this > method isn't practical. 
Nevertheless, we can extend the bytemap to > cover a lot more real memory, as long as we have a way to map from > virtual memory addresses to elements of the bytemap - this is the tricky > bit. So far we haven't tackled this problem in a general way. > > Cheers, > Simon > ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
[Haskell] GHC 64bit?
Is there a maximum memory GHC can use/reach? Specifically, can GHC address more than 4gb of memory? -Alex- _ S. Alexander Jacobson mailto:[EMAIL PROTECTED] tel:917-770-6565 http://alexjacobson.com ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
[Haskell] behavioral difference between GHC and GHCi
This code echoes lines properly under GHCi, but just sucks in input when running the GHC-compiled executable:

   import IO

   main = do
     x <- hGetLine stdin
     putStr x
     main

Am I doing something wrong? -Alex- _____ S. Alexander Jacobson mailto:[EMAIL PROTECTED] tel:917-770-6565 http://alexjacobson.com ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
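The usual explanation for this GHC/GHCi difference is output buffering: GHCi runs with NoBuffering on stdout, while a compiled program gets line- or block-buffering, so the putStr output (which has no newline) sits in the buffer. Assuming that is the cause here, an explicit hSetBuffering makes the two behave alike (a sketch in modern System.IO terms):

```haskell
import System.IO

main :: IO ()
main = do
  hSetBuffering stdout NoBuffering  -- match GHCi's interactive default
  loop
  where
    loop = do
      eof <- isEOF                  -- also avoid dying on end-of-input
      if eof
        then return ()
        else do
          x <- hGetLine stdin
          putStrLn x
          loop
```

Alternatively, keeping the original structure and calling `hFlush stdout` after each putStr would have the same effect.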
[Haskell] Making Haskell web development easier
I decided to try developing a web app using Haskell/HaskellWebServer/HaXML and found the barriers both very high and perhaps easily lowered. Could someone out there: -- 1. Provide a link to the Haskell web server CODE on <http://www.haskell.org/practice.html> (or, if it has been end-of-lifed, remove it from the CVS repository) 2. Document anon CVS access to haskell.org in some obvious location? 3. Add a README for the package saying * the package doesn't build with the enclosed Makefile * the package doesn't build on win32 because it depends on the posix package * STRONGLY suggest that people use the sourceforge package rather than this one --- Also, what does it take to get posix/HWS working on win32? I suspect that much of posix is easily portable, but that a few functions are not. I don't know if HWS is dependent on these functions. If HWS (and other packages) actually use the portable subset of posix, perhaps it makes sense to create a separate posix_LowestCommonDenominator package. If not, how much effort is involved in making the posix package work on win32? -Alex- _________ S. Alexander Jacobson mailto:[EMAIL PROTECTED] tel:917-770-6565 http://alexjacobson.com ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
Re: How to make reading an array from disk more efficient
On Wed, 24 Dec 2003, Hal Daume III wrote: > one other thing you might find useful is to read it imperatively and then > use unsafeFreezeArray (i think that's the name) to get a pure array out of > it. since all you'll be doing is reading, this should work nicely for > you. I must say, this sort of reply is EXTREMELY DISTURBING. It is ridiculous that one needs to get so intimate with implementation issues just to read a file! Is this inherent in Haskell or is it just that no one has bothered to create the Haskell equivalent of BufferedReader? <http://java.sun.com/j2se/1.4.2/docs/api/java/io/BufferedReader.html> And if so, is there something about Haskell that makes a Haskell implementation of such a lib difficult? -Alex- _________ S. Alexander Jacobson mailto:[EMAIL PROTECTED] tel:917-770-6565 http://alexjacobson.com > On Wed, 24 Dec 2003, andrew cooke wrote: > > > > > Thanks. I should have added that I will only use the array for reading > > once it's created. I don't mind whether creating is lazy or eager (it's > > currently eager because I was fighting a space leak, but I think that was > > down to some other error). > > > > I don't fully understand how either of the suggestions you make will speed > > up creation, though (I guess allocating less memory is faster). I'm > > looking for a factor of tens of thousands improvement. But I will try > > what you suggest. > > > > Cheers, > > Andrew > > > > PS Thanks for the very speedy reply and thanks to whoever maintains the > > list archive for updating the archive for each message (I'm pretty sure it > > used to be made daily, which was a nuisance if you wanted to forward URLs > > to interesting discussions). > > > > Hal Daume III said: > > > (1) use unboxed arrays, otherwise you're wasting too much space with > > > pointers. 
that is, unless you need laziness on the elements, which i > > > don't think you do based on your list > > > > > > (2) (maybe) use imperative arrays; this will help you ensure that > > > everything is being evaluated quickly. > > > > > > -- > Hal Daume III | [EMAIL PROTECTED] > "Arrest this man, he talks in maths." | www.isi.edu/~hdaume > > ___ > Haskell mailing list > [EMAIL PROTECTED] > http://www.haskell.org/mailman/listinfo/haskell > ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
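For reference, the suggested idiom is small in practice: fill a mutable array while reading, then unsafeFreeze it exactly once, so the read-only consumer sees a pure array with no final copy. A hedged sketch using today's module names (the original thread predates them):

```haskell
import Data.Array.IO (IOUArray, newListArray)
import Data.Array.Unboxed (UArray, (!))
import Data.Array.Unsafe (unsafeFreeze)

-- Load a file into an unboxed array: mutable while filling,
-- frozen in place (no copy) before anyone reads it.
readAsArray :: FilePath -> IO (UArray Int Char)
readAsArray path = do
  s <- readFile path
  m <- newListArray (0, length s - 1) s :: IO (IOUArray Int Char)
  unsafeFreeze m

main :: IO ()
main = do
  writeFile "input.txt" "hello"   -- hypothetical input file for the demo
  a <- readAsArray "input.txt"
  print (a ! 1)  -- 'e'
```

unsafeFreeze is safe here because the mutable array is never touched again after freezing.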
Re: haskell httpd
Frustrating. It's been 4 years since I seriously looked at Haskell. I would have expected that over this time, someone here would have consolidated it into a language useful for real world applications. The Haskell in Practice page is shockingly short and many of the examples are not actually Haskell in practice. Does anyone here eat the Haskell dog food or is this all just fooling around (basic research)? Simon, why did you write the Haskell web server? Ok. Is anyone running this web server at all? What is changing in Haskell that makes 3-year-old code so uncompilable? -Alex- ___ S. Alexander Jacobson Check out my new blog!!! 1-212-787-1914 voice http://alexjacobson.com On Tue, 11 Nov 2003, Peter Simons wrote: > S Alexander Jacobson writes: > > > 1. Is there ssl support for the haskell httpd somewhere? > > Not that I'd know. > > > > 2. Does this httpd actually build w/ modern GHC? > > It probably will, but not out-of-the-box. The code hasn't > been actively maintained for a while. > > > > More complex question: [...] > > HWS-WP is an _experimental_ web server; it is by no means > ready for the kind of production set-up you apparently need. > > Peter > > ___ > Haskell mailing list > [EMAIL PROTECTED] > http://www.haskell.org/mailman/listinfo/haskell > ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
Re: haskell httpd
Thank you for the discussion, but let me ask some more questions: Simple questions: 1. Is there ssl support for the haskell httpd somewhere? 2. Does this httpd actually build w/ modern GHC? 3. Why doesn't haskell.org run this httpd? More complex question: Assumptions: * This httpd can do >1000 req./sec. on modern CPUs (enough for me) * I write-ahead log all PUT/POST/DELETE requests before executing * I can reproduce server state at a given time by replaying the log * I checkpoint periodically so I don't have to replay the whole log * I treat each HTTP PUT/POST/DELETE as a discrete state transition Question: Can I make sure that I have concurrency only w/r/t client communication? I don't want the thread of control to change during state transitions. What happens if state is too big to fit in memory? * Does forkIO switch control if a thread writes to the disk but write-caching is enabled or if the thread access some part of memory that is swapped to disk? Optimization question: If I am using RAID, can I allow control to switch if I am handling multiple GET requests but lock for PUT/POST/DELETE requests? Notes: * Write-caching means the app is not slowed by disk writes * Write-ahead logs mean not worrying about crashes during writes * If necessary, I can scale up GET performance using caching and multiple CPUs each doing log recovery. -Alex- _______ S. Alexander Jacobson Check out my new blog!!! 1-212-787-1914 voice http://alexjacobson.com ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
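The write-ahead discipline listed in the assumptions is compact enough to sketch; the names here are hypothetical, and note that hFlush only pushes data to the OS, so real durability would also need an fsync-level call:

```haskell
import System.IO

-- Append the request to the log and flush before applying the
-- transition, so replaying the log can rebuild state after a crash.
logThenApply :: Handle -> String -> (st -> st) -> st -> IO st
logThenApply logH req step st = do
  hPutStrLn logH req   -- 1. write-ahead: record the request first
  hFlush logH          -- 2. hand it to the OS before mutating
  return (step st)     -- 3. only then perform the state transition

main :: IO ()
main = do
  h  <- openFile "wal.log" WriteMode
  n  <- logThenApply h "POST /counter" (+1) (0 :: Int)
  n' <- logThenApply h "POST /counter" (+1) n
  hClose h
  print n'                       -- 2
  readFile "wal.log" >>= putStr  -- the two logged requests
```

The forkIO question remains orthogonal: as long as a transition like `step` is pure and applied atomically, a context switch during client I/O cannot observe a half-applied transition.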
haskell httpd
I've been working on a new project and evaluating the Twisted Python framework, which relies on asynchronous I/O rather than threading to achieve speed. The resulting idioms feel like they would be much more elegant in Haskell. So my question at this point is: Is there a reasonably efficient Haskell httpd implementation around that uses poll/select? -Alex- _______ S. Alexander Jacobson Check out my new blog!!! 1-212-787-1914 voice http://alexjacobson.com ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
MIME-RPC as an FFI (was Re: Storable Class)
On Fri, 4 Jan 2002, Sven Panne wrote: > Did you have a look at the FFI draft > http://www.cse.unsw.edu.au/~chak/haskell/ffi.ps.gz I just read this paper. It seems to focus on local interfaces rather than on interfaces that will also work across network connections. A modern FFI interface should offer both transparent object serialization (like Java, but readable) and interfaces that work between programs running on different machines (RPC). MIME provides the infrastructure required for doing so. It defines both a serialization format and a public type system that individual languages may support in different ways. MIME is already in use in millions of web applications and web browsers. I've tried to document its use as an FFI at MIME-RPC.com (and created a discussion group for MIME-RPC related issues at http://groups.yahoo.com/group/mime-rpc). I had been thinking of it as a system for inter-process messaging (as well as object serialization), and created an implementation in Python for doing so. However, there are no Haskell or C implementations yet. I would welcome thoughts from people here on how to do both efficiently and how to use such implementations for in-process communication as well as network communication. -Alex- PS Dave Winer's Scripting.com just picked up MIME-RPC so I think the idea is gaining some traction. _______ S. Alexander Jacobson i2x Media 1-212-787-1814 voice 1-603-288-1280 fax ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
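To make the serialization idea concrete: a MIME-style encoding pairs a typed header with a textual body. The sketch below is purely hypothetical; the header and type names are illustrative only and do not reflect the actual MIME-RPC wire format defined at MIME-RPC.com.

```haskell
-- Hypothetical MIME-style envelope: a Content-Type header identifying
-- the value's type, a blank line, then the serialized body.
serialize :: String -> String -> String
serialize ctype body = "Content-Type: " ++ ctype ++ "\r\n\r\n" ++ body

-- Example: serializing a Haskell Int under an invented type label.
encodeInt :: Int -> String
encodeInt n = serialize "text/plain; type=integer" (show n)
```

The point of the scheme is that the header carries enough type information for a receiver in another language to reconstruct the value, whether the message crosses a process boundary or a network.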
Re: GUI Library Task Force
Great. So that is something that goes into some library conventions document. Java has a convention that libraries should have reverse domain name structure. Is that how we should use _? -Alex- On Wed, 26 Sep 2001, Lennart Augustsson wrote: > "S. Alexander Jacobson" wrote: > > > On Wed, 26 Sep 2001, Manuel M. T. Chakravarty wrote: > > > > Given that Haskell98 is not ready for libraries anyway, why are you so > > > > concerned about it? > > > > > > It isn't? Why? Because of the lack of hierachical name > > > spaces? Then, C isn't ready for libraries either. > > > > As I posted in a prior thread: > > * Library namespace is broken > > So is C's, however it relies MUCH more strongly on build tools like > > Make. All popular languages that have been created in the last 20 years > > appear to have saner systems (Java, Perl, Python, etc). The problem is > > worse in Haskell because the number of built in libraries is very small > > and the code reuse goals are much more ambitious. > > Yes, the Library namespace is not ideal, and there has been a suggestion > to add `.' to module names to solve this. If we don't want to do that, just > use `_' as a name separator instead of `.'. This way you can build hierarchies. > > -- Lennart > > ___ S. Alexander Jacobson Shop.Com 1-646-638-2300 voiceThe Easiest Way To Shop (sm) ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
RE: GUI Library Task Force
On Wed, 26 Sep 2001, Manuel M. T. Chakravarty wrote: > > Given that Haskell98 is not ready for libraries anyway, why are you so > > concerned about it? > > It isn't? Why? Because of the lack of hierarchical name > spaces? Then, C isn't ready for libraries either. As I posted in a prior thread: * Library namespace is broken So is C's, however it relies MUCH more strongly on build tools like Make. All popular languages that have been created in the last 20 years appear to have saner systems (Java, Perl, Python, etc). The problem is worse in Haskell because the number of built-in libraries is very small and the code reuse goals are much more ambitious. * Library Interfaces This is a much bigger issue for Haskell than C. Are they monadic (and which monad)? Should they really be arrows (but then they will rely on non-standard syntax)? Etc. > > A GUI system without concurrency is still incomplete. > > There are loads of large GUI-based applications out there > that don't use concurrency. > > BTW, my point was not to say that we rule out concurrency. > I said, we do not demand it. (Same as in many GUI libraries > for the most widespread imperative languages.) Haskell will not be production quality without concurrency. If concurrency allows for a cleaner API and easier-to-use library, then use it. BeOS had deep concurrency throughout and was a much better OS as a result. It's 2001, there is no reason I shouldn't be able to have two threads drawing things on the screen simultaneously. If the issue is that we still don't know how to do concurrency in Haskell, that seems like a MUCH higher priority than sorting out GUIs. > > The haskell library interface story is still pretty weak because there is > > no consensus about what monad they should expose (and whether they should > > really expose arrows or something else). Why not focus on rolling from > > H98 into a production quality Haskell system and make the best GUI for > > that system? 
> > Currently, there doesn't seem to be much interest in going > for a completely new version of Haskell. The idea of adding > addenda to H98 and so slowly and in incremental steps move > to more functionality seems to be more popular. That's great. I don't disagree. It's just a matter of priority. So how about this agenda? Addendum .5?: Parametrized Libraries Addendum 1: Hierarchical Library Namespace Addendum 2: Concurrency Addendum 3: FFI Addendum 4: Exceptions Addendum 4.5?: Proc Syntax for Arrows Addendum 5: Library Interface Compatibility Guidelines Addendum 6: Enumeration of standard Haskell libraries (GUI etc) If we could focus on converging these addenda rapidly and in a particular order, then Haskell could reach a much better place much more quickly. Many of these things are individually small changes from H98, but together they make it a much stronger language. -Alex- PS I don't know if this agenda is in the right order. I do think that everyone randomly thrashing about with different language features will make everything take much longer. ___ S. Alexander Jacobson Shop.Com 1-646-638-2300 voice The Easiest Way To Shop (sm) ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
RE: GUI Library Task Force
Given that Haskell98 is not ready for libraries anyway, why are you so concerned about it? A GUI system without concurrency is still incomplete. The haskell library interface story is still pretty weak because there is no consensus about what monad they should expose (and whether they should really expose arrows or something else). Why not focus on rolling from H98 into a production quality Haskell system and make the best GUI for that system? -Alex- On Tue, 25 Sep 2001, Manuel M. T. Chakravarty wrote: > "Simon Peyton-Jones" <[EMAIL PROTECTED]> wrote, > > > | * Start from the API of GTK+ as a base line: > > > > That's fine by me. But can I suggest that the task force be sure > > to read the details of the Clean GUI library design. Peter Aachten > > (while visiting Cambridge) rendered a good chunk of it into Haskell, > > using MVars exactly as you describe. (The type structure became > > a good deal simpler than the Clean library when using MVars instead > > of state-passing.) > > > > The reason I think its worth a careful look is that the Clean folk have > > developed their library interface over a long period, and have a lot > > of experience in its use. It is not specific to any one platform or > > toolkit, nor does it attempt to implement everything, so its general > > goals fit with the ones you describe. > > I have looked at the Clean Object IO. While it is certainly > worthwhile to take from this approach whatever we can, I > have a number of reasons why I am not convinced that the > model itself should be adopted for the Haskell GUI: > > * Non-standard extensions: It requires concurrency and > judging by Section 6 rank-2 polymorphism. I am not quite > sure how crucial the latter is. > > * It integrates facilities for concurrent and distributed > programming (asynchronous communication via channels). I > still think, we can keep the GUI API and concurrency as > two orthogonal features. 
If you take these features out > and use IORefs instead of MVars, you are already quite > close to the model that we currently aim at. > > * I am not a big fan of introducing an extra monad (`GUI' in > this case). It can easily become a pain in programs that > do a lot of "normal" IO as you have to lift all IO > functions to GUI. > > * After this, the main difference that remains is the > representation of GUI components as a vanilla data type > instead of opaque handles that do not make the structure > of the components explicit in the types (like the TupLS > does). From the paper, it wasn't clear to me how useful > that is for the application programmer. > > Cheers, > Manuel > > ___ > Haskell mailing list > [EMAIL PROTECTED] > http://www.haskell.org/mailman/listinfo/haskell > ___ S. Alexander Jacobson Shop.Com 1-646-638-2300 voiceThe Easiest Way To Shop (sm) ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
Joy and Concatenative Programming
I just found out about a functional programming language called Joy (see http://www.latrobe.edu.au/philosophy/phimvt/joy.html). Joy differs from Haskell in that it has no variables. Instead, all functions are postfix, taking a stack as their argument and returning a stack as a result. Joy advocates contend that the elimination of variables and environments improves on functional languages in much the same way the elimination of state improved on imperative languages. In particular, they claim the result is massively simplified semantics and implementation (all the tree-rewriting rules go away), faster development, easier optimization (search and replace lists of functions) and even simple meta-programming. Here is a quick example program to give a flavor for how it works. [1 2 3 4] [dup *] map == [1 2 3 4] [square] map is the same as Haskell's map (\x->x*x) [1,2,3,4] == map square [1,2,3,4] Recursion is handled via some built-in combinator primitives. For example this is qsort: qsort == [small] [] [uncons [>] split] [[swap] dip cons concat] binrec (the explanation of how this works is at http://www.latrobe.edu.au/philosophy/phimvt/joy/j01tut.html) Currently Joy is an interpreted language without a compiler-checked type system, but there is some discussion about adding such a system. (It is also apparently non-lazy, but I'm not sure what that means in a world without variables). The system is very new and primitive, but looks promising. I am curious what the wiser and more experienced people on this list think of Joy's approach. -Alex- ___________ S. Alexander Jacobson Shop.Com 1-646-638-2300 voice The Easiest Way To Shop (sm) ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
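Joy's stack-to-stack model embeds almost directly in Haskell, which makes the comparison concrete. Below is a minimal sketch: `push`, `dup`, `mul`, and `run` are names invented for the example, covering only a tiny integer-stack fragment of what Joy's primitives actually do.

```haskell
-- Each Joy "word" is a function from stack to stack; a program is a
-- list of words applied left to right.
type Stack   = [Int]
type JoyWord = Stack -> Stack

push :: Int -> JoyWord
push n s = n : s

dup :: JoyWord
dup (x : s) = x : x : s
dup []      = error "dup: empty stack"

mul :: JoyWord
mul (x : y : s) = x * y : s
mul _           = error "mul: need two operands"

-- Thread the stack through the program, one word at a time.
run :: [JoyWord] -> Stack -> Stack
run ws s0 = foldl (\s w -> w s) s0 ws

-- Joy's  [1 2 3 4] [dup *] map  becomes, roughly:
squares :: [Int]
squares = map (\x -> head (run [push x, dup, mul] [])) [1, 2, 3, 4]
```

Note how composition of words replaces variable binding: `[push 2, dup, mul]` is Joy's `2 dup *`, no names required.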
Re: RFC: GUI Library Task Force
Try http://www.cs.uu.nl/people/ralf/hw2001/3.html I originally saw it on lambda: http://lambda.weblogs.com/2001/09/07 -Alex- On 24 Sep 2001, Carl R. Witty wrote: > "Manuel M. T. Chakravarty" <[EMAIL PROTECTED]> writes: > > > + More sophisticated approaches (that often require > > language extensions or are still experimental) can be > > implemented on top of this basic API - eg, FranTk, > > Yahu, Fruit, iHaskell, etc. > > I keep seeing references to Fruit (as a "functional" Haskell GUI) but > I can't find any real information about it. Is there a web page for > it? > > Carl Witty > > ___ > Haskell mailing list > [EMAIL PROTECTED] > http://www.haskell.org/mailman/listinfo/haskell > ___ S. Alexander Jacobson Shop.Com 1-646-638-2300 voiceThe Easiest Way To Shop (sm) ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
Re: The future of Haskell discussion
I don't know that it is possible for Haskell98 to have industrial strength libraries (even if the authors wanted to finish and support them). * Library namespace issues are not resolved There is no guarantee that a library with a given name will not conflict with another library of the same name. There have been various discussions of creating nested namespaces but they are not part of Haskell98. Without a clean namespace, libraries cannot rely on and integrate with each other. (Note: the related issue of library version management also needs some resolution). * Parametrized libraries There have been various discussions of importing libraries in ways that allow the importer to control which types and classes the library actually uses. I think this issue may have been dropped but it creates some uncertainty for library authors w/r/t firming up library structure. * Library interfaces This is going more over my head. But I believe that there is an issue of choosing a standard library interface. If I understand things correctly, whether a library interface is a Monad (and which monad) or an Arrow (and which arrow) or an X substantially constrains what libraries can be used with it. It is perhaps the case that these constraints lighten considerably in the presence of existential types (or fundeps?) and that Arrows are more viable in the presence of Proc syntactic sugar, but none of these are part of Haskell98 even if they are really likely to be part of Haskell2. Unless I am wrong about all of the above, it seems more important to converge Haskell to a language that can support industrial strength libraries and that is Haskell2 rather than Haskell98. -Alex- _______ S. Alexander Jacobson Shop.Com 1-646-638-2300 voice The Easiest Way To Shop (sm) On Sun, 16 Sep 2001, Bill Halchin wrote: > > > Jeff has hit the nail on the head .. thanks Jeff. You said eloquently what I was >hinting at > > or saying very implicit (because I didn't know how to say it eloquently). 
The >"Haskell > > library" seems to be contributions by individuals (who should be commended!!), but as > > an "industrial" programmer who writes in imperative languages everyday (and sees > > them as many times getting in the way, e.g. C++, and not modeling a particular > > problem very elegantly!), with Haskell I would like to see a library API part of the > > Haskell Report, i.e. a nice list of type signatures by topic, e.g. numeric. (maybe >this > > is already the situation ... I unfortunately have not had a lot of chance to write > > Haskel code even though I like FPL's and Haskell in particular). The haskell library >API > > should be part of the Haskell standard just as the standard C library is part of the > > ANSI C standard! > > Regards, Bill Halchin > > >From: Jeffrey Palmer > >To: [EMAIL PROTECTED] > >Subject: Re: The future of Haskell discussion > >Date: 14 Sep 2001 17:06:49 -0500 > > > >On Fri, 2001-09-14 at 15:12, Mark Carroll wrote: > > > On Fri, 14 Sep 2001, Bill Halchin wrote: > > > > > > > Probably this question has been brought before. Besides the Preludes, > > > > why doesn't > > > > > > > > Haskell have libraries like Java, Squeak (Smalltalk). I found this: > > > (snip) > > > > > > I'm puzzled - it does! - see http://www.haskell.org/libraries/ for some of > > > them. > > > > > > >I think the question is more along the lines of "Why doesn't Haskell > >come bundled with complete, useful and _supported_ libraries?" > > > >For example, the Edison home page describes the library in this way: > > > >"in its current state, the library is mostly a framework. That is, I > >provide signatures, but not yet very many implementations..." > > > >This is not the type of thing that your standard software engineer wants > >to hear. Professional software developers need to be highly productive, > >and are often unwilling to invest time learning libraries that aren't > >part of the core language environment. 
However you feel about the > >design of the Java Collections API, at least it's a supported part of > >the language. Developers feel comfortable that any time spent learning > >how to use these APIs is worthwhile. > > > >I felt this very recently when looking for a quality GUI framework for > >Haskell. There appear to be many(!) libraries available, and all seem > >to be in various states of completion. Personally, I would like to see > >someone complete the port of the Clean library that
Re: The future of Haskell discussion
If the GUI is based on the IO monad, then it doesn't seem like there is a lot of advantage to doing it in Haskell. It seems like a better idea to use a more natural language for IO and make RPC/interproc calls to a haskell server to get stuff done. In other words, what is the value of the GTK+ haskell interface? Shouldn't more effort be put into getting Fruit production quality and/or figuring out how to use arrows to manage textual and network IO? -Alex- _______ S. Alexander Jacobson Shop.Com 1-646-638-2300 voiceThe Easiest Way To Shop (sm) On Fri, 14 Sep 2001, Manuel M. T. Chakravarty wrote: > "S. Alexander Jacobson" <[EMAIL PROTECTED]> wrote, > > > Out of curiosity, how does GTK+ compare with Fruit? > > GTK+ has a C API heavily based on call backs and mutable > state. Thus, the Haskell transcription of that API heavily > relies on the use of the IO monad - as does H98 textual IO. > > > It seems like it would make sense for the standard Haskell GUI also to be > > functional. > > A functional GUI would be nice, but standard Haskell text > and file I/O is not functional either. Functional GUIs like > Fruit are from a research perspective very interesting, but > their design is rather far from being a solved problem, > which makes them a not very likely candidate for a standard > that people seem to like to have sooner rather than later. > > Cheers, > Manuel > > > ___ > Haskell mailing list > [EMAIL PROTECTED] > http://www.haskell.org/mailman/listinfo/haskell > ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
Re: The future of Haskell discussion
Out of curiosity, how does GTK+ compare with Fruit? It seems like it would make sense for the standard Haskell GUI also to be functional. -Alex- PS I don't do GUI stuff so I don't really know much. I did read the Fruit paper and it looked interesting. On Fri, 14 Sep 2001, Manuel M. T. Chakravarty wrote: > Wolfgang Jeltsch <[EMAIL PROTECTED]> wrote, > > > That's why I > > think GTK+ should be used mainly to develop applications which are intended > > to run under GNOME and preferably not to do cross-plattform GUI programming. > > I think the best solution for the latter thing is to use a library which has > > multiple implementations based on different "native" libraries like Win32, > > GTK+, Qt. wxWindows (http://www.wxwindows.org/) is an example for this kind > > of library. > > wxWindows is quite C++ centric and AFAIK nobody has made a > serious effort at a C++ FFI yet. One of the big advantages > of GTK+ is that it was written with bindings for other > languages in mind. Therefore, it is probably the toolkit > with the most language bindings. > > One alternative would be to standardise on a kind of subset > of the GTK+ API and then somebody with a lot of spare time > could implement that on top of the Win32 API natively - in > the meantime, the original GTK+ libraries would at least > provide some form of implementation under Win32. (You > should bear in mind that anything that doesn't build on > existing infrastructure involves a lot of coding and I > haven't seen many volunteers stepping forward yet.) > > Manuel > > ___ > Haskell mailing list > [EMAIL PROTECTED] > http://www.haskell.org/mailman/listinfo/haskell > ___ S. Alexander Jacobson Shop.Com 1-646-638-2300 voiceThe Easiest Way To Shop (sm) ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
Re: The future of Haskell discussion
Rather than talking about general features of the language that might improve adoption, it is more useful to talk about specific features of the language that make it killer in a particular application domain. In his classic book, "Crossing the Chasm : Marketing and Selling High-Tech Products to Mainstream Customers", Geoffrey Moore argues that the way to gain mainstream adoption of a new technology is to target specific segments of the customer population and to deliver whole product to them (because integration challenges are daunting). In http://www.paulgraham.com/lib/paulgraham/bbnexcerpts.txt, Paul Graham argues that: One of the reasons to use Lisp in writing Web-based applications is that you *can* use Lisp. When you're writing software that is only going to run on your own servers, you can use whatever language you want. And further that: Until recently, writing application programs meant writing software to run on desktop computers. In desktop software there was a strong bias toward writing the application in the same language as the operating system. I would add that web based applications can use web interfaces and that HTML is a good interface to many applications. (and that Paul Graham's comments about Lisp are also true of Haskell) As such, I would like to see a focus on making Haskell great for web application and web service development. Some of the pieces required are application level, some are libraries, and some are language features. Here is my quick take: Application Framework * a simple build/install process on both unix and win32 * a way to run/link haskell applications to a web server (apache) * a decent library organization and CPAN-like library sharing system * a system for publishing apps to live servers Libraries * an OS file/directory access library * a database connection library (even just ODBC would be fine!) 
* a database connection pool library * a mail handling library * an XML parser library * an XML-RPC/SOAP library Language Features * concurrency (to make requests to multiple servers simultaneously) * FFI (to access libraries in other languages) * exceptions (may not matter, depending on the webserver/haskell interface!) Documentation * an O'Reilly-class book on learning and developing web apps in Haskell * sample applications that demonstrate useful web service functions * a process for managing Haskell web app development As a general matter, the addendum process strikes me as confusing and dangerous. I don't want to have a conversation like: I am using Haskell'98 with Addendum A, C, and E. I'd rather say, I am using Haskell 2001 and know that it is useful for developing web apps. I know this is a lot of work, but it is what you get from Python, Perl, and Java. If Haskell wants to compete in this arena, it needs to provide this level of service. Also, I think a lot of these exist in pieces, so the real work is in compiling it all into a good usable package. I am not volunteering to do it, but I would be happy to help beta if someone else does. -Alex- PS There may be other better/easier initial application domains for Haskell, but this is what I know. _______ S. Alexander Jacobson Shop.Com 1-646-638-2300 voice The Easiest Way To Shop (sm) On Fri, 14 Sep 2001, Manuel M. T. Chakravarty wrote: > Olaf Chitil <[EMAIL PROTECTED]> wrote, > > > Here a short summary by Malcolm and me of the final discussion at the > > Haskell workshop: > > I also took a couple of notes which I like to add. > > > John Launchbury and many further people made a plea that the single > > biggest hindrance to the further spread of Haskell is the lack of a > > standard > > cross-platform GUI. Alas, no answer to the problem was found. 
There is > > no agreement which (existing) library could be the basis of a standard > > one and nobody wanted to commit himself to developing and supporting > > such a library. Well, Manuel Chakravarty promised to continue developing > > the GTK+ binding and would be happy about people helping him. (The GUI > > library presented at the workshop is not intended to solve the standard > > GUI problem.) > > In fact, the recent release of binary packages for Gtk+HS > and an example applications that demonstrates how to use > the GTK+ API in Haskell have been a reaction to the > discussions at HW & ICFP. > > Let me reiterate: Gtk+HS as it is today is sufficient for > applications requiring a GUI of medium complexity. As far > as I see, despite not covering all of GTK+ yet, Gtk+HS > already has a wider variety of widgets and functionality > than Tcl/Tk provides in its whole API. So, at least on > Unix, the statement that there is no GUI for Hask
Re: Another question wrt hiding imports
I asked this a while ago and never got an answer. Asking again in better context: How do you control importing operator precedence? Suppose that you have: f x = 2 + 2 * x And an imported module increases the precedence of (+). You end up getting mangled. My assumption is that the only way to protect yourself from this is to import qualified so A.+ has a different precedence from +. But I don't think the report promises this. The report just promises that A.+ has the same precedence as (+) inside module A. -Alex- _______ S. Alexander Jacobson Shop.Com 1-646-638-2300 voiceThe Easiest Way To Shop (sm) ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
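The concern can be made concrete with local operators. In the sketch below, `+.` and `*.` are invented stand-ins for `+` and `*`; the point is that the fixity declaration attached to an operator's definition, not anything at the call site, decides how `2 +. 2 *. x` parses. (Per the Haskell 98 report, an imported operator carries the fixity declared in its defining module, so an import cannot silently change the precedence of a locally defined operator, though an unqualified name clash would make uses ambiguous.)

```haskell
-- Invented operators mirroring the standard fixities of (+) and (*).
infixl 6 +.
infixl 7 *.

(+.) :: Int -> Int -> Int
(+.) = (+)

(*.) :: Int -> Int -> Int
(*.) = (*)

f :: Int -> Int
f x = 2 +. 2 *. x   -- parses as 2 +. (2 *. x) because *. binds tighter
```

Qualified import (so uses read `A.+`) sidesteps the name-clash case entirely, which matches the workaround suggested above.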
Re: Why is there a space leak here?
On 6 Jun 2001, Carl R. Witty wrote: > "S. Alexander Jacobson" <[EMAIL PROTECTED]> writes: > > > For example w/ foldl: > > > > foldl (+) 0 [1..1] > > foldl (+) ((+) 0 1) [2..1] > > foldl (+) ((+) ((+) 0 1) 2) [3..1] > > > > Can't the implementation notice that each iteration leads to a > > larger closure and, if it is running out of space go ahead and just > > evaluate (+) 0 1? > > It's complicated. You can't (in general) know whether application of > a function will increase or decrease the space used. If you were > running out of space, would you just search the whole unevaluated > program graph for reductions which somehow seemed "likely" to reduce > the space used? Would you add such reduction nodes to some global > list at the time they were created? I'm not clear why you can't in general notice that you are using more space after function application than before. I find it hard to see why a program couldn't do the analysis I just did on foldl. You could accumulate statistics on functions that increase/decrease space used at runtime and evaluate those that do reduce space used... > > I realize that there is a risk of evaluating _|_ unnecessarily, but if you > > are otherwise going to run out of memory, you might as well give it a > > shot. > > > > In practice, how often do you expect to see growing expressions that cover > > a _|_ that are not actually an error in any case? > > It's certainly possible. You are trading off the likelihood that an exploding expression contains a bottom against the likelihood that the programmer would prefer the exploding expression not to explode. Much of this type of work can be done as test-time warnings > One portable way to implement a memoizing function in Haskell (if the > domain of the function is countable) is to lazily build a data > structure that contains the results of the function on every possible > argument. 
Then you evaluate the portions of the data structure that > you need; the result on each argument is only evaluated once. This > probably would count as a "growing expression", and it's certainly > possible that the function on some arguments would be bottom. I don't think I understood this. Can you clarify? -Alex- ___ S. Alexander Jacobson Shop.Com 1-646-638-2300 voiceThe Easiest Way To Shop (sm) ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
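The memoization idiom Carl describes is easiest to see with a small example. The sketch below (a memoized `fib`; the names are illustrative) builds a lazy list holding the function's result at every argument, so demand forces each entry at most once:

```haskell
-- The lazy memo structure: conceptually infinite, but only the entries
-- that are demanded are ever evaluated.
fibs :: [Integer]
fibs = map fib [0 ..]

-- Recursive calls go through the shared list, so each result is
-- computed once and then reused.
fib :: Int -> Integer
fib 0 = 0
fib 1 = 1
fib n = fibs !! (n - 1) + fibs !! (n - 2)
```

This is the "growing expression" from the discussion above: the list keeps expanding as more entries are demanded, and entries for arguments that are never used (which could even be bottom) simply stay unevaluated.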
Re: Why is there a space leak here?
This whole discussion seems strange... Is laziness an operational or a semantic issue? Why can't haskell implementations reduce some expressions to save space? In particular, why can't haskell mark expressions that grow after evaluation, and reduce them if too much space is being consumed. For example w/ foldl: foldl (+) 0 [1..1] foldl (+) ((+) 0 1) [2..1] foldl (+) ((+) ((+) 0 1) 2) [3..1] Can't the implementation notice that each iteration leads to a larger closure and, if it is running out of space go ahead and just evaluate (+) 0 1? I realize that there is a risk of evaluating _|_ unnecessarily, but if you are otherwise going to run out of memory, you might as well give it a shot. In practice, how often do you expect to see growing expressions that cover a _|_ that are not actually an error in any case? Hunting down memory leaks is already so obscure, that you might as well take a shot at solving the problem automatically... Alternatively, is there some magical way of warning about leaky expressions at compile time? You don't have to ban them, but it would be nice if the programmer were aware of which parts of the code are likely to grow... -Alex- On Tue, 5 Jun 2001, Tom Moertel wrote: > Alastair David Reid wrote: > > > > Executive summary: David's program has an incredibly subtle space leak > > in it (or I'm being incredibly dumb). I encourage the honchos (and > > would be honchos) to have a look. Users of other compilers might give > > it a shot too. > > > David Bakin wrote: > > > > Why is there a space leak in foo1 but not in foo2? > > The reason that foo1 "leaks" space is because the middle of v grows > faster than its head. So taking elements from v causes its in-memory > footprint to grow. 
To see why this is the case, evaluate foo1 by hand: > > > -- This has a space leak, e.g., when reducing (length (foo1 100)) > > foo1 m > > = take m v > > where > > v = 1 : flatten (map triple v) > > triple x = [x,x,x] > > Focusing on just v for now, and letting f = flatten for notation > purposes, we have > > (1) v = 1 : f (map triple v) > > (2) = { unwrap v } > 1 : f (map triple (1 : f (map triple v))) > > (3) = { eval map } > 1 : f (triple 1 : map triple (f (map triple v))) > > (4) = { eval triple } > 1 : f ([1,1,1] : map triple (f (map triple v))) > > (5) = { eval f (= flatten = foldr (++) []) } > 1 : 1 : 1 : 1 : f (map triple (f (map triple v))) > > In order to expose elements 2-4 of v, we had to evaluate v to the extent > that the overall expression held in memory *grew*. Notice how in (1) we > had a single (f (map triple ...)) expression in the tail of v but in (5) > there are two such expressions, nested. > > Continuing further, if we want to expose the 5th-7th elements of v, we > have to expand the expression yet even more. Noticing that the (f (map > triple v)) subexpression in (5) is identical to the tail of (1), we can > apply the same expansion that we derived in (1)-(5) to yield > > (6) = { repeat (1)-(5) for f (map triple v) in (5) } > 1 : 1 : 1 : 1 : > f (map triple (1 : 1 : 1 : > f (map triple ( > f (map triple v))))) > > (7) = { eval map } > 1 : 1 : 1 : 1 : > f (triple 1 : map triple ( > f (map triple ( > f (map triple v))))) > > (8) = { eval triple } > 1 : 1 : 1 : 1 : > f ([1,1,1] : map triple ( > f (map triple ( > f (map triple v))))) > > (9) = { eval f } > 1 : 1 : 1 : 1 : 1 : 1 : 1 : > f (map triple ( > f (map triple ( > f (map triple v))))) > > Notice how in (9) we have three nested (f (map triple (...))) > expressions in the tail of v whereas in (5) we had only two and in (1) > we had but one? 
> > Now you can see why foo1 has a space "leak": In order to take the Nth > element of v, v's definition must be expanded to the point where there > are 1+(N+1)/3 (f (map triple (...))) subexpressions in the tail of v > *that will never be reached*. In other words, v's "middle" grows faster > than its head, ensuring that take will never consume the tail. Taking > elements from the head only makes the middle grow larger. The more your > take, the larger it grows. > > So the problem isn't Hugs but rather the definition of v, which grows > faster than it can be consumed. > > Cheers, > Tom > > ___ > Haskell mailing list > [EMAIL PROTECTED] > http://www.haskell.org/mailman/listinfo/haskell > ___ S. Alexander Jacobson Shop.Com 1-646-638-2300 voiceThe Easiest Way To Shop (sm) ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
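The foldl example near the top of this message is the classic leak that strict accumulation fixes: rather than hoping the runtime notices the closure exploding, the programmer forces the accumulator at each step. A sketch using the standard `foldl'` and an explicit `seq` (the smaller list bound is arbitrary, just enough to exercise the fold):

```haskell
import Data.List (foldl')

lazySum, strictSum :: Int
lazySum   = foldl  (+) 0 [1 .. 10000]  -- builds ((((0+1)+2)+3)+...) as one thunk
strictSum = foldl' (+) 0 [1 .. 10000]  -- accumulator evaluated at every step

-- Hand-rolled version making the seq explicit:
foldlStrict :: (b -> a -> b) -> b -> [a] -> b
foldlStrict _ z []       = z
foldlStrict f z (x : xs) = let z' = f z x in z' `seq` foldlStrict f z' xs
```

Note that `seq` here is exactly the "go ahead and just evaluate (+) 0 1" step proposed above, except the programmer, not the runtime, decides it is safe (e.g. that the accumulator is not hiding a needed bottom).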
Happy and Macros (was Re: ANNOUNCE: Happy 1.10 released)
Combining two threads... Like macros and preprocessors, Happy generates code. I assume the justification for this is that hand-coding a parser in Haskell is presumed to be too difficult or that it is too hard to get the right level of abstraction (and therefore a macro-like facility is required). However, I've also used Hutton & Meijer style monadic parsers and found them extremely elegant, clean, and easy to use in both Haskell and Python (though in Python they were too slow for my xml application -- function call overhead is _very_ high in Python). I am not a parsing expert, but given the recent discussion on macros, I have to ask: why use happy rather than monadic parsing? Monadic parsing allows you to avoid a whole additional language/compilation step and work in Hugs (where you don't have a makefile). What does Happy buy you here? And generalizing from the above, since Monads/Arrows are types that describe a computation declaratively and Macros are functions that describe a computation procedurally, is it possible that, for any application of Macros, you can find a suitable Monad/Arrow? Or am I not understanding either well enough? -Alex- _______ S. Alexander Jacobson Shop.Com 1-646-638-2300 voiceThe Easiest Way To Shop (sm) ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
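For comparison, a minimal Hutton & Meijer style monadic parser fits in a few lines of plain Haskell; this sketch (all names illustrative, not from any particular library) is the kind of thing the post has in mind when it says such parsers work directly in Hugs with no extra compilation step:

```haskell
-- A parser consumes a string and returns all parses: a result paired
-- with the remaining input.
newtype Parser a = Parser { runParser :: String -> [(a, String)] }

instance Functor Parser where
  fmap f (Parser p) = Parser $ \s -> [(f a, rest) | (a, rest) <- p s]

instance Applicative Parser where
  pure a = Parser $ \s -> [(a, s)]
  Parser pf <*> Parser pa =
    Parser $ \s -> [(f a, s'') | (f, s') <- pf s, (a, s'') <- pa s']

instance Monad Parser where
  Parser p >>= f = Parser $ \s ->
    concat [runParser (f a) s' | (a, s') <- p s]

item :: Parser Char
item = Parser $ \s -> case s of
  []     -> []
  (c:cs) -> [(c, cs)]

failP :: Parser a
failP = Parser (const [])

sat :: (Char -> Bool) -> Parser Char
sat p = do
  c <- item
  if p c then pure c else failP

digit :: Parser Char
digit = sat (`elem` ['0'..'9'])

many1 :: Parser a -> Parser [a]
many1 p = do
  x  <- p
  xs <- many0 p
  pure (x : xs)

many0 :: Parser a -> Parser [a]
many0 p = Parser $ \s -> case runParser (many1 p) s of
  []    -> [([], s)]
  (r:_) -> [r]

number :: Parser Int
number = fmap read (many1 digit)
```

Running runParser number "123abc" yields the parsed number together with the unconsumed input, with failure represented as an empty result list.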
RE: Haskell jobs (fwd)
Folks, I have seen a few answers here. Let me add a few other constraints:

* we are a unix shop, so win32 solutions don't work well here
* the codebase needs to be production quality (handle millions of hits per day)
* there should be a network of users (or a support organization) running and supporting the software
* we are largely running Java as our platform, so easier Java integration is important

> S. Alexander Jacobson writes:
> > Off the top of my head here are some Haskell specific things that we need:
> > * HSP pages (like ASP or JSP or PHP)
>
> Erik Meijer has done this. Can't find the preprint online, though. (Erik?)

Is this production quality? Proof-of-concept implementations don't cut it.

> > * in memory Haskell server analogous to JServ that talks to apache
>
> mod_haskell?
> http://losser.st-lab.cs.uu.nl:8080/

Same question. It appears to be 0.1.

> > * Haskell access to a pool of database connections
>
> Daan Leijen's HaskellDB?
> http://haskell.cs.yale.edu/haskellDB/

Windows only.

> > * Haskell access to Java classes
>
> Erik's Lambada
> http://www.cs.uu.nl/people/erik/Lambada.html

I know about Lambada. It is experimental. According to the homepage: "The current release does not offer much tool support to access Java classes from Haskell yet". It is also Windows only. Is anyone here using it?

> > * Encapsulation of Haskell as Java classes
>
> I don't know what that means, exactly. You mean a Hugs-like implementation in
> Java? Not a bad idea... do you need that, though, now that GHC can produce
> Java bytecode? Anyway, once the Java backend stabilizes, you would (in
> principle, at least :) be able to cross-compile GHC into a Java binary, and
> then use its upcoming interactive frontend. You still wouldn't have
> programmatic hooks (i.e., a Java-level rather than console-level interface) to
> the front-end, but it would become much easier to add.
Actually, the ability to generate Java bytecode would be a BIG win here if GHC-generated code could smoothly call Java classes as well. (Then you get db pools from Java!)

> > And all of this has to be relatively zipless and compatible with an
> > existing JServ/JSP/Apache installation.
>
> Eh? "zipless"?

It has to be relatively easy to configure and install. Overall, Haskell has a lot of promise. It is just not there. Perhaps when I get the right people, we will take a look and discover that the hurdles aren't so great. Mainly I need developers who would explore this. If I get that, the rest is easy.

-Alex-

___
S. Alexander Jacobson
Shop.Com
1-212-420-7700 voice
The Easiest Way To Shop (sm)
RE: Haskell jobs (fwd)
I think many of the issues were discussed with great clarity on slashdot. If we get the relevant critical mass of functional programmers, you will definitely be hearing from us. Off the top of my head here are some Haskell specific things that we need:

* HSP pages (like ASP or JSP or PHP)
* in memory Haskell server analogous to JServ that talks to apache
* Haskell access to a pool of database connections
* Haskell access to Java classes
* Encapsulation of Haskell as Java classes

And all of this has to be relatively zipless and compatible with an existing JServ/JSP/Apache installation. I am leaning towards python because it has curry, map, filter, lambda expressions and a Java interpreter.

-Alex-

___
S. Alexander Jacobson
Shop.Com
1-212-420-7700 voice
The Easiest Way To Shop (sm)

On Sun, 16 Jul 2000, Manuel M. T. Chakravarty wrote:
> "S. Alexander Jacobson" <[EMAIL PROTECTED]> wrote,
> > I am President of Shop.com and know and love Haskell (and FP in general).
> > Nonetheless, the issue with using Haskell commercially is that:
> > * we cannot afford the training costs associated with getting new
> > programmers up to speed and finding programmers at all (much less Haskell
> > programmers) is fairly difficult.
> > * Haskell interfaces with databases and existing Java libraries are not
> > all that complete/clean
>
> Could you maybe tell us what it is that you need with
> respect to database interfaces and Java libraries (and where
> appropriate where deficiencies are in existing solutions)?
> I think, many people here would be happy to help plugging
> holes if that makes Haskell more usable, but it is sometimes
> a bit difficult for us to see which holes bite users most.
>
> Thanks,
> Manuel
RE: Haskell jobs (fwd)
I am President of Shop.com and know and love Haskell (and FP in general). Nonetheless, the issue with using Haskell commercially is that:

* we cannot afford the training costs associated with getting new programmers up to speed, and finding programmers at all (much less Haskell programmers) is fairly difficult.
* Haskell interfaces with databases and existing Java libraries are not all that complete/clean

All that being said, we are just now building our team out. If we can get a critical mass of good functional programmers who want to work in our NY office, I would definitely be willing to explore developing our site using FP techniques if not languages. Shop.com is a pre-IPO idealab!-backed company. We now have funding and are looking for people in New York City who have:

* competence with Java, JDBC and SQL
* competence with functional programming (in python or haskell)
* experience with CVS, Linux, and Apache
* experience building database backed web sites

We are also trying to establish an enjoyable environment for developers. The programming style here will follow a mix of XP and McConnell style:

* daily huddle (reduces meetings)
* daily build & smoke (the site must always be publishable)
* pair programming (rather than code reviews)
* bug fixes come first
* active risk management
* UI prototype and acceptance tests before coding

If you are interested, please send your info to [EMAIL PROTECTED]

-Alex-

___
S. Alexander Jacobson
Shop.Com
1-212-420-7700 voice
The Easiest Way To Shop (sm)

On Fri, 14 Jul 2000, Benjamin Leon Russell wrote:
> Speaking of Haskell- and Haskell-related- jobs, does anybody know of any good Haskell internships? Somebody told me that there are many more academic opportunities in functional programming in England than here, but what about non-academic ones?
> > --Ben Russell > [EMAIL PROTECTED] > [EMAIL PROTECTED] > > On Thu, 13 Jul 2000 15:44:27 -0500 > Brett Letner <[EMAIL PROTECTED]> wrote: > > Group - > > > > Does anyone out there know of any haskell, or haskell > > related jobs in the > > U.S.? > > If so, please let me know so that I can send a resume (or > > you can find it > > on the internet at resumes.dice.com/bletner). > > I've searched sites like hotjobs.com and monster.com but > > haven't found > > much. > > > > I appologize for sending this since it is off-topic, but > > as payment I offer > > a few tidbits you might enjoy... > > 1) If you want some blank stares, start talking to a > > contracting company > > sales-rep about haskell and why you want to work with it. > > 2) If you are lonely, post a resume to the major job > > sites with C++ and > > UNIX and indicate you are willing to relocate. > > 3) Anybody think there is a market for teaching haskell > > in industry? This > > is about the only way I can think of to earn a living and > > work with haskell > > at the same time. > > 4) Should I just try for a Ph.D. and wait for the rest > > of the country to > > catch up? > > > > Any ideas or leads would be appreciated. > > Thanks > > Brett Letner > > [EMAIL PROTECTED] > > > > > > > > > > > >
FRP/FRAN vs O'Haskell
Can someone give a brief comparison of the FRP approach with O'Haskell? Both frameworks seem to revolve around asynchronous interaction between objects in continuous time. The O'Haskell folks argue that you need a new language to express this activity well. The FRP folk seem happy with Haskell as is. FRAN handles events (e.g. mouseclicks). Is there any reason it couldn't handle network events too? Where does FRP fail such that a new language is required? If the two systems are complementary, how would FRP be enhanced by O'Haskell?

-Alex-

_______
S. Alexander Jacobson
Shop.Com
1-212-697-0184 voice
The Easiest Way To Shop (sm)

On Fri, 19 May 2000, Paul Hudak wrote:
> > Has anyone built any block simulators (for modeling continuous
> > electronic systems, like OP Amps, RC networks, etc) in Haskell?
>
> There have been several replies to this already, but permit me to add my
> 2 cents worth:
>
> FRP ("Functional Reactive Programming") is an abstraction of Fran
> ("Functional Reactive Animation") that is ideally suited to describing
> such things, since it is based on continuous (time-varying) values, as
> opposed to discrete values. You can find out a lot about Fran from
> Conal Elliott's home page (http://www.research.microsoft.com/~conal) and
> from my book (http://haskell.org/soe), and about FRP at
> http://haskell.org/frob. My student Zhanyong Wan and I also have a new
> PLDI paper on the formal underpinnings of FRP if anyone is interested
> (it's not on the web yet).
>
> As for Haskore:
>
> > I'm also interested in this. I am thinking of extending
> > Paul Hudak's Haskore system to generate and handle true audio data
> > (instead of, or in addition to) MIDI data.
> >
> > I don't think I'll have enough time to do the programming myself,
> > but since I'll be using Hudak's book in next term's course,
> > I hope I can attract some students, and set them in the right
> > direction.
> > > > In fact one student who read the course announcement > > (and the book's web page) already asked me > > about functional audio signal processing. > > The latest release of Haskore (http://haskell.org/haskore) includes an > interface to Csound. That is, one can wire up oscillators, modulators, > special effects, etc. in a nice declarative style in Haskell, which then > gets compiled into a Csound instrument file, which in turn gets compiled > by Csound into actual sound files (.wav, .snd, etc.). The nice thing > about this is that it's fairly efficient because of the back-end > processing. To do this in FRP would be much less efficient. > > Hope this helps, > > -Paul >
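To make the FRP side of the comparison concrete, here is a toy sketch of Fran/FRP's central idea, behaviors as continuous-time values. This is an illustration only, not Fran's actual representation (which is richer and also handles events and efficient sampling):

```haskell
-- A toy model of FRP's core idea: a Behavior is a value that varies
-- over continuous time, modeled here simply as a function of time.
type Time = Double
type Behavior a = Time -> a

-- The identity behavior: at time t its value is t.
time :: Behavior Time
time = id

-- Lifting ordinary functions to operate pointwise on behaviors.
lift1 :: (a -> b) -> Behavior a -> Behavior b
lift1 = (.)

lift2 :: (a -> b -> c) -> Behavior a -> Behavior b -> Behavior c
lift2 f ba bb t = f (ba t) (bb t)

-- A point orbiting the origin, as a single time-varying value:
orbit :: Behavior (Double, Double)
orbit = lift2 (,) (lift1 cos time) (lift1 sin time)

-- Sample a behavior at a given instant:
at :: Behavior a -> Time -> a
at b t = b t
```

Nothing here needs a new language; the open question raised above is whether the event-handling and object-interaction side (O'Haskell's focus) fits Haskell as comfortably.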
FYI: Mozilla Languages and RDF Inference
There is a lot of talk right now of building new languages into Mozilla.

Mozilla RDF / Enabling Inference
http://www.mozilla.org/rdf/doc/inference.html
This page discusses using SWI-Prolog for inference on RDF markup and the Semantic Web (see http://www.w3.org/DesignIssues/Semantic.html). The author is looking for suggestions of better/more-appropriate languages for this task. XSB is on this list; why not Mercury, Charity, or Haskell?

Report from the Mozilla Developers Meeting
http://slashdot.org/article.pl?sid=00/04/08/1110208&mode=thread
This discussion veered into that of better client side scripting in Mozilla. Some lobbying for an open language implementation (or our favorite language) might be appropriate.

-Alex-

___
S. Alexander Jacobson
Shop.Com
1-212-697-0184 voice
The Easiest Way To Shop
RE: runtime optimization of haskell
The issue is not just generating the data for a particular running instance, but rather:

1. generating the data from all/many running instances on user machines
2. sending that data back to the developer for analysis
3. generating useful reports on aggregate data

Jakob Nielsen says that data from 5 users is enough to address most user problems, so the above may be overkill, but I think it depends on the problem you want to solve (performance optimization being somewhat more complex). In any case, the point is to collect data on behavior of the code under real life operating conditions rather than the test conditions of the lab.

-Alex-

___
S. Alexander Jacobson
Shop.Com
1-212-697-0184 voice
The Easiest Way To Shop

On Mon, 27 Mar 2000, Simon Peyton-Jones wrote:
> One such tool is coming up. GHC is producing cost-centre stack logs, which
> we feed into Stephen Jarvis's call-graph profile browser. It highlights the
> 'hot' path for you. It's fantastic!
>
> Our goal is to release it with the next GHC release, but it depends a bit
> on Stephen's time.
>
> If this isn't what you meant, maybe you can say more precisely what
> you were after?
>
> Simon
>
> | -----Original Message-----
> | From: S. Alexander Jacobson [mailto:[EMAIL PROTECTED]]
> | Sent: 28 March 2000 02:43
> | To: [EMAIL PROTECTED]
> | Cc: [EMAIL PROTECTED]
> | Subject: Re: runtime optimization of haskell
> |
> | Good web developers do log analysis to figure out what users
> | like and what
> | they don't and to figure out which paths to shorten.
> |
> | It would be nice if haskell programs could generate similar "access
> | logs" or traces that would feed back to the programmer or maintainer
> | information about which functions were accessed when and how.
> | > | The developer, knowing a lot about what is causing the user grief and > | the domain in which the user is spending most of his/her time > | could better > | optimize the interface and specification as well as the > | implementation of > | future versions. > | > | A trace-feedback implementation would be a great way to give > | developers > | feedback and would do much more to improve software than any > | type of rt > | optimization. > | > | -Alex-- > | > | ___ > | S. Alexander Jacobson Shop.Com > | 1-212-697-0184 voiceThe Easiest Way To Shop > | > | > | On Thu, 23 Mar 2000, Jan-Willem Maessen wrote: > | > | > I'm not currently working on dynamic compilation, but I went to > | > Vasanth Bala's talks at the Dynamic Compilation Workshop before PoPL > | > and later at MIT. I, too, was intrigued by the possibility of doing > | > this sort of dynamic compilation for functional languages. > | I've given > | > the technical points some thought, so I thought I'd share what I'd > | > come up with. > | > > | > First, Dynamo, Crusoe, and their ken focus on optimization > | of run-time > | > program traces. It was quite clear from Bala's talk that traces > | > should not be assumed to match function boundaries; we may compile > | > some trace through a function (the recursive case, for > | example) while > | > interpreting the rest (eg. the base case). The efficiency of the > | > technique depends on the fact that we compile _only_ the > | code which is > | > being frequently executed, as we have a limited cache to hold that > | > compiled code. > | > > | > The biggest promise of this approach is that it eliminates most > | > unknown control flow. In C code this is things like function return > | > (a trace doesn't necessarily end when a function returns), calls to > | > dynamically-linked functions, and calls to function pointers/virtual > | > functions. All of these are far more common in Haskell than in C; > | > this makes the technique look very attractive indeed. 
> | > > | > On the other hand, Dynamo at least does not always succeed. Very > | > unstructured symbolic code such as GCC and Go slow down and cause > | > Dynamo to give up. This is less promising; symbolic > | applications seem > | > like just the sort of thing we want to re-code in Haskell, as they > | > involve the complex data structure manipulation that is so > | much easier > | > with algebraic types. > | > > | > A good step toward trying this out, though, would be to devise a > | > trace-compilable bytecode for Haskell. I'
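A rough sketch of what the per-function "access logs" proposed in this thread could look like in plain Haskell, threading a log of entries alongside results. All names here are hypothetical; this is not an existing tool:

```haskell
-- A hypothetical sketch of per-function "access logs": each
-- instrumented call records the function's name and its argument,
-- and a pipeline concatenates the entries into an aggregate log
-- that could be shipped back to the developer for analysis.
type Log = [String]

logged :: Show a => String -> (a -> b) -> a -> (b, Log)
logged name f x = (f x, [name ++ " " ++ show x])

-- Example pipeline: every stage contributes its log entries, giving
-- the maintainer a trace of which functions ran, on what, in order.
pipeline :: Int -> (Int, Log)
pipeline n =
  let (a, l1) = logged "double" (* 2) n
      (b, l2) = logged "incr"   (+ 1) a
  in (b, l1 ++ l2)
```

For example, pipeline 5 yields (11, ["double 5", "incr 10"]). A production version would of course log to a file or socket rather than threading lists by hand, which is closer to the cost-centre profiling machinery Simon describes above.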
Re: runtime optimization of haskell
Good web developers do log analysis to figure out what users like and what they don't and to figure out which paths to shorten. It would be nice if haskell programs could generate similar "access logs" or traces that would feedback to the programmer or maintainer information about which functions were accessed when and how. The developer, knowing a lot about what is causing the user grief and the domain in which the user is spending most of his/her time could better optimize the interface and specification as well as the implementation of future versions. A trace-feedback implementation would be a great way to give developers feedback and would do much more to improve software than any type of rt optimization. -Alex-- _______ S. Alexander Jacobson Shop.Com 1-212-697-0184 voiceThe Easiest Way To Shop On Thu, 23 Mar 2000, Jan-Willem Maessen wrote: > I'm not currently working on dynamic compilation, but I went to > Vasanth Bala's talks at the Dynamic Compilation Workshop before PoPL > and later at MIT. I, too, was intrigued by the possibility of doing > this sort of dynamic compilation for functional languages. I've given > the technical points some thought, so I thought I'd share what I'd > come up with. > > First, Dynamo, Crusoe, and their ken focus on optimization of run-time > program traces. It was quite clear from Bala's talk that traces > should not be assumed to match function boundaries; we may compile > some trace through a function (the recursive case, for example) while > interpreting the rest (eg. the base case). The efficiency of the > technique depends on the fact that we compile _only_ the code which is > being frequently executed, as we have a limited cache to hold that > compiled code. > > The biggest promise of this approach is that it eliminates most > unknown control flow. 
In C code this is things like function return > (a trace doesn't necessarily end when a function returns), calls to > dynamically-linked functions, and calls to function pointers/virtual > functions. All of these are far more common in Haskell than in C; > this makes the technique look very attractive indeed. > > On the other hand, Dynamo at least does not always succeed. Very > unstructured symbolic code such as GCC and Go slow down and cause > Dynamo to give up. This is less promising; symbolic applications seem > like just the sort of thing we want to re-code in Haskell, as they > involve the complex data structure manipulation that is so much easier > with algebraic types. > > A good step toward trying this out, though, would be to devise a > trace-compilable bytecode for Haskell. I'm not sure that e.g. G-code > really fits the bill---we want a bytecode that we can unroll > dynamically without having to keep track a lot of context information > (such as what's on the stack). We would also want "usual case" > bytecodes that would verify unknown control flow against a guess at > the actual control flow. > > I suspect producing traces of pure bytecode which unroll control flow > with guesses ought to produce pretty good speedups when compared to > current bytecode interpreters (hugs, nhc); I'd be very interested to > see if this approach can compete with machine code that doesn't do any > guessing. In any case, this seems like a good proof of concept---if > we can't speed byte code up in this way, I wouldn't expect trace > compilation to do any better than any other kind of JIT compilation. > > I'd love to get the chance to do this experiment some day. > > -Jan-Willem Maessen > [EMAIL PROTECTED] > > > Here are some references, btw: > > Dynamo: http://www.hpl.hp.com/cambridge/projects/Dynamo > > Read the tech report from HP, as it contains a fair level of detail > and gives a pretty good feel for the strengths and weaknesses of the > approach. 
As I say, it really wins when there's lots of unknown > control flow (jumps to addresses in registers) that turns out to be > fairly predictable in practice. > > > The best chance I've seen at matching this statically was the GRIN > project, which Thomas Johnsson was pursuing a while back at Chalmers. > He used an exhaustive control-flow analysis to reduce each instance of > unknown control flow (eg. the forcing of a thunk) to a small set of > known destinations (eg. the possible thunks that actually reach that > program point). Combine that with interprocedural register allocation > (courtesy Urban Boquist) and you get the fastest Haskell code I've > seen. Any comments on this from the Chalmers folks? Alas I don't &
Re: Reflection
This discussion feels like deja-vu all over again! What is wrong with the various generic programming extensions that have already been discussed? Derive, PolyP and their progeny? -Alex- ___ S. Alexander Jacobson Shop.Com 1-212-697-0184 voiceThe Easiest Way To Shop On Wed, 26 Jan 2000, Fergus Henderson wrote: > On 25-Jan-2000, Marcin 'Qrczak' Kowalczyk <[EMAIL PROTECTED]> wrote: > > Tue, 25 Jan 2000 18:12:32 +0100, jwit <[EMAIL PROTECTED]> pisze: > > > > > What I'm getting at is some kind of way to get your > > > hands on an abstract syntax representation of a Haskell > > > expression/datatype/module, modifying it, re-typechecking it, > > > and then transforming it back into a Haskell value. > > > > In other words you want to embed a Haskell compiler in your program. > > > > It could be useful in practice sometimes, but it's a very far concept > > from a simple extension of the language to allow reflection. > > > > Haskell is quite complex and can be compiled. IMHO the distance > > between the source code and the running program is very big in the > > case of Haskell. In other words, much is done at compile time instead > > of runtime. I like this principle, because it allows an elegant and > > powerful language with lots of static checking and optimizations, > > even though it makes reflection harder. > > > > A function definition is the specification of what an identifier > > means in its scope, not the physical bytes of its compiled or > > source code. It can be inlined, instantiated, unfolded, analyzed for > > strictness, rewritten using RULES etc. - it does not have to appear > > in a physical form that can be examined by the program. C's dlopen() > > is different because C is a more low level language - in C there is > > a closer correspondence between the source function and the execution > > time object. 
> > Well, Mercury has the same approach as Haskell, in the sense of being > a language which at least aims at being elegant and powerful with > lots of static checking and optimizations. But we do support > dynamic linking (on some platforms), using an interface built > on top of dlopen() and dlsym(). > > Supporting dynamic linking need not inhibit optimization to any > significant degree, I believe. At worst, you may need to disable > inter-module dead-function elimination. Even that need only be done > if dynamic linking is used. > > The next stage of support for reflection, being able to at run-time > get your hands on an abstract syntax representation of a Haskell > expression/datatype/module, does have a higher cost. The compiler > needs to keep information around at run-time which it would otherwise > not need. In Mercury we currently support that for types, but not > for expressions or modules. > > One way of reducing the cost of this feature is to require > programmers to explicitly mark with some declaration > those entities for which the compiler should keep the > information around at run-time. ISO Prolog takes this > approach; predicates for which you can use the > `clause/2' builtin to look up their definition > need to be declared "dynamic" using a `:- dynamic' declaration. > Ada takes the converse approach: by default, it keeps > around some tables to allow you to convert enumeration constants > into strings, but there is a standard pragma which allows > you to suppress that for a given enumeration. > > The final stage -- being able to take a representation of > a Haskell expression, re-typechecking it, and then transforming > it back into a Haskell value -- does not actually _require_ > any language or runtime support, I think; you can program > it in standard Haskell. Though I guess some support might be > required if you want to introduce new types at run-time. 
> > -- > Fergus Henderson <[EMAIL PROTECTED]> | "I have always known that the pursuit > WWW: <http://www.cs.mu.oz.au/~fjh> | of excellence is a lethal habit" > PGP: finger [EMAIL PROTECTED]| -- the last words of T. S. Garp. >
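The "keep type information around at run-time" level of reflection Fergus describes can be illustrated with the Data.Typeable machinery in today's GHC, which postdates this discussion and is used here purely to show the design point, not as something available to the original correspondents:

```haskell
-- A minimal sketch of type-level reflection: the compiler retains a
-- run-time representation of each value's type, which the program
-- can inspect. (Data.Typeable is modern GHC; illustration only.)
import Data.Typeable (Typeable, typeOf)

describe :: Typeable a => a -> String
describe x = "value of type " ++ show (typeOf x)

-- e.g. describe (42 :: Int) mentions Int, and describe (Just True)
-- mentions Maybe Bool.
```

This covers the "reflection on types" stage; reflection on expressions and modules, as the thread notes, has a higher cost because far more compile-time structure must survive to run time.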
RE: drop & take [was: fixing typos in Haskell-98]
On Tue, 25 Jan 2000, BYRNE, Peter wrote:
> (2) If take and drop are to be defined for negative integers, what happens to
> take (-n) xs
> when n > len xs? Judging from the definitions proposed:
> take (-5) [1..4] == [1,2,3]
> and things look less useful than confusing.

The python behavior is:

  take n list | length list + n < 0 = []
  drop n list | length list + n < 0 = list

I think this is the correct complement (dual?) of:

  take n list | length list - n < 0 = list
  drop n list | length list - n < 0 = []

(the current behavior)

The consistent overall behavior on "abc" results in this:

         -3     -2    -1    0      1     2     3
  take   ""     "a"   "ab"  "abc"  "a"   "ab"  "abc"
  drop   "abc"  "bc"  "c"   ""     "bc"  "c"   ""

  (take n list) ++ (drop n list) == list

This is sane and consistent with itself and python.

-Alex-

___________
S. Alexander Jacobson
Shop.Com
1-212-697-0184 voice
The Easiest Way To Shop

> > -----Original Message-----
> > From: S. Alexander Jacobson [SMTP:[EMAIL PROTECTED]]
> > Sent: Tuesday, January 25, 2000 10:16 AM
> > To: Tommy Thorn
> > Cc: [EMAIL PROTECTED]
> > Subject: Re: drop & take [was: fixing typos in Haskell-98]
> >
> > > IMHO, that would be the _insane_ definitions :-) Firstly, nothing
> > > suggests to me the rationale of such behaviour.
> >
> > The rationale is:
> > 1. these are useful functions
> > 2. if this is insane, so is python. The corresponding python is:
> >
> > def take(list, n): return list[:n]
> > def drop(list, n): return list[n:]
> >
> > Python interpreter example:
> > >>> list="abcdef"
> > >>> list[:-2]
> > 'abcd'
> > >>> list[-2:]
> > 'ef'
> >
> > 3. think of n as being calculated 'mod' length of the list
> > take n list | n<0 = take (n `mod` (length list)) list
> > drop n list | n<0 = drop (n `mod` (length list)) list
> > --(equivalent definitions)
> >
> > > Secondly, it would mean losing an important set of laws:
> > > drop n . drop m === drop (n + m)
> > > take n .
take m === take (n + m) > > > (which, I note in passing, is broken also by suggestion A) > > > > All the proposals break this law as well, so I this argument is weak (if > > not insane :-)) > > > > -Alex- > > ___ > > S. Alexander Jacobson Shop.Com > > 1-212-697-0184 voiceThe Easiest Way To Shop > > > > > > On Mon, 24 Jan 2000, Tommy Thorn wrote: > > > > > S. Alexander Jacobson writes: > > > > The correct definitions would be: > > > > > > > > take -2 -- drops the last 2 elements from the list > > > >(takes everything except the last 2 elements) > > > > drop -2 -- grabs the last 2 elements from the list > > > >(drops everything except the last 2 elements) > > > > > > > These are also sane definitions.. > > > > > > > > > > > > Regards, > > > > > >Tommy > > > > > > > > > > > >
Re: drop & take [was: fixing typos in Haskell-98]
> IMHO, that would be the _insane_ definitions :-) Firstly, nothing
> suggests to me the rationale of such behaviour.

The rationale is:

1. these are useful functions
2. if this is insane, so is python. The corresponding python is:

   def take(list, n): return list[:n]
   def drop(list, n): return list[n:]

   Python interpreter example:
   >>> list="abcdef"
   >>> list[:-2]
   'abcd'
   >>> list[-2:]
   'ef'

3. think of n as being calculated 'mod' the length of the list:

   take n list | n<0 = take (n `mod` (length list)) list
   drop n list | n<0 = drop (n `mod` (length list)) list
   --(equivalent definitions)

> Secondly, it would mean losing an important set of laws:
> drop n . drop m === drop (n + m)
> take n . take m === take (n + m)
> (which, I note in passing, is broken also by suggestion A)

All the proposals break this law as well, so I think this argument is weak (if not insane :-))

-Alex-

___
S. Alexander Jacobson
Shop.Com
1-212-697-0184 voice
The Easiest Way To Shop

On Mon, 24 Jan 2000, Tommy Thorn wrote:
> S. Alexander Jacobson writes:
> > The correct definitions would be:
> >
> > take -2 -- drops the last 2 elements from the list
> >   (takes everything except the last 2 elements)
> > drop -2 -- grabs the last 2 elements from the list
> >   (drops everything except the last 2 elements)
>
> These are also sane definitions..
>
> Regards,
>   Tommy
RE: fixing typos in Haskell-98
Ok, so I got it backward. The functionality is still useful and belongs with take and drop. The correct definitions would be:

take -2 -- drops the last 2 elements from the list
           (takes everything except the last 2 elements)
drop -2 -- grabs the last 2 elements from the list
           (drops everything except the last 2 elements)

drop n list | n<0 = drop (length list + n) list
take n list | n<0 = take (length list + n) list

These are also sane definitions..

-Alex-

On Tue, 25 Jan 2000, Brian Boutel wrote:
> On Tuesday, January 25, 2000 8:38 AM, S. Alexander Jacobson [SMTP:[EMAIL PROTECTED]] wrote:
> Why not do what python does?
>
> drop -2 -- drops the last 2 elements from the list
> take -2 -- grabs the last 2 elements from the list
>
> take n list | n<0 = drop (length list + n) list
> drop n list | n<0 = take (length list + n) list
>
> [...]
> I think this solution also satisfies Chris Okasaki's:
>
> take n xs ++ drop n xs == xs
>
> (except where the list is infinite)
>
> try it:
> take -2 [1,2,3,4] -> [3,4]
> drop -2 [1,2,3,4] -> [1,2]
> take -2 [1,2,3,4] ++ drop -2 [1,2,3,4] -> [3,4,1,2]
>
> --brian

___
S. Alexander Jacobson
Shop.Com
1-212-697-0184 voice
The Easiest Way To Shop
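The corrected definitions can be written out as a self-contained sketch (take' and drop' are hypothetical names, used only to avoid shadowing the Prelude functions):

```haskell
-- Python-flavored negative arguments for take and drop, following
-- the corrected proposal: a negative n counts from the end of a
-- (necessarily finite) list.
take' :: Int -> [a] -> [a]
take' n xs
  | n < 0     = take (length xs + n) xs  -- everything except the last |n|
  | otherwise = take n xs

drop' :: Int -> [a] -> [a]
drop' n xs
  | n < 0     = drop (length xs + n) xs  -- just the last |n| elements
  | otherwise = drop n xs
```

These only make sense on finite lists, since the negative case forces length; for finite xs (with |n| at most length xs) the law take' n xs ++ drop' n xs == xs holds for negative n as well.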
Re: fixing typos in Haskell-98
Why not do what python does? drop -2 -- drops the last 2 elements from the list take -2 -- grabs the last 2 elements from the list take n list | n<0 = drop (length list + n) list drop n list | n<0 = take (length list + n) list If the list is an infinite list, the behavior is equivalent to B. If the list is finite, these are really useful primitives, like chop() in perl. I think this solution also satisfies Chris Okasaki's: take n xs ++ drop n xs == xs (except where the list is infinite) -Alex- _______ S. Alexander Jacobson Shop.Com 1-212-697-0184 voiceThe Easiest Way To Shop On Mon, 24 Jan 2000, Jon Fairbairn wrote: > > > Take and drop > > > [..] > > > I can see three alternatives: > > > > > > (A) Make them defined for any n. If n < 0, do something reasonable: > > > take: give empty list > > > drop: give whole list > > > > > > (B) Make them defined for n > length xs, but fail for n < 0. > > > > > > (C) Status quo > > > > > > PROPOSAL: Use alternative (A) > > > > I vote for (B). > > Ditto, though I would say that the restriction is that the > argument should be a natural number, and its simply a > failing of the type system that it cannot express this. > > -- > Jón Fairbairn [EMAIL PROTECTED] > 18 Kimberley Road[EMAIL PROTECTED] > Cambridge CB4 1HH +44 1223 570179 (pm only, please) > > >
Re: deleteBy
Can we stop polluting the namespace with list based function definitions? Most of these functions: delBy, filter, map, concat, length, take, takeWhile, etc. are well specified for data structures other than lists. Regardless of whether Haskell includes generic programming extensions, it would be nice to be able to use these function names for the same operation in other datastructures (even if they are implemented manually). -Alex- On Tue, 7 Dec 1999, S.D.Mechveliani wrote: > To my proposal to add to Haskell-2 library > delBy :: (a -> Bool) -> [a] -> [a] > > Keith Wansbrough <[EMAIL PROTECTED]> > writes on 06 Dec 1999 > > > So what do you propose as the definition for > > > > del :: (Eq a) => [a] -> [a] > > > > ? > > > > Section 7.6 of the Library Report: > > > > "By convention, overloaded functions have a non-overloaded > > counterpart whose name is suffixed with ``By''." > > > Does this really imply that having delBy, the library has > necessarily to include `del' ? > Anyway, let us try to improve the proposal: > --- > (1) To change in the `List' library the deleteBy definition to > > deleteBy :: (a -> Bool) -> [a] -> [a] > deleteBy _ [] = [] > deleteBy p (a:as) = if p a then as else a:(deleteBy p as) > > deleteBy of Haskell-98 to rename to > delByR2e (delete by binary relation (a->a->Bool) and element a). > > - or better to remove it at all. For its type looks esoteric. > I wonder, who ever uses it. > > (2) To change in the Section 7.6 of the Library Report > " > By convention, overloaded functions have a non-overloaded > counterpart whose name is suffixed with ``By''. > " > to > " > By convention, an overloaded function may have several > non-overloaded counterparts whose names are suffixed with > `By'[ss]. > Examples: By means "by something simplest", > ByPby predicate, > ByR2 by binary relation, > ... > " > - > > > -- > Sergey Mechveliani > [EMAIL PROTECTED] > > > ___ S. Alexander Jacobson Shop.Com 1-212-697-0184 voiceThe Easiest Way To Shop
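One way to let these names range over other data structures is a type class, sketched here hypothetically (the class and method names are illustrative, not a concrete proposal; the methods carry a prefix only to avoid clashing with today's list-specific Prelude names):

```haskell
-- A hypothetical class generalizing list operations to other
-- containers, so "filter"-style names need not be list-specific.
class Collection f where
  cfilter   :: (a -> Bool) -> f a -> f a
  cdeleteBy :: (a -> Bool) -> f a -> f a  -- delete the first match

instance Collection [] where
  cfilter = filter
  cdeleteBy _ [] = []
  cdeleteBy p (x:xs)
    | p x       = xs
    | otherwise = x : cdeleteBy p xs

instance Collection Maybe where
  cfilter p (Just x) | p x = Just x
  cfilter _ _              = Nothing
  cdeleteBy p (Just x) | p x = Nothing
  cdeleteBy _ m             = m
```

The list instance of cdeleteBy matches the (a -> Bool) signature proposed in this thread, deleting only the first element satisfying the predicate, while the same names work unchanged on other structures.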
Re: RE to Peyton-Jones
It is not obvious that Haskell provides an order of magnitude improvement in any of these areas. Where I think Haskell (or Haskell compiler writers) could really be useful is in providing better XML transformation languages and implementations. XML Schemas are emerging as the de facto type system for the Internet. XSLT is the W3C-designed functional programming language for processing XML. It seems like the people in this community know more about the "right" way to design and implement an XML transformation language than anyone else.

For example, XSLT does not easily support fold-type operations for accumulating report summaries, though this is exactly the type of task that an XML transformation language should be designed to accomplish. In addition, XSLT implementations are INCREDIBLY SLOW. A functional programming language that allows speedy processing of XML data structures would be a big win here.

I know that this may not be Haskell as it is currently conceived, but I assume that the people here know more about the right answers than anyone else.

-Alex-

On Thu, 25 Nov 1999, Eduardo Costa wrote:

> >| I mean, a group who could produce a
> >| competitive compiler, useful not only to people who are
> >| interested in testing the language, but also in using it to
> >| produce commercial and industrial tools.
> >
> >I think that would be absolutely splendid and I would do whatever
> >I could to support such a person/group/company.
> >
> >Simon
>
> Then, don t you think that, if we contact software companies, we
> could find some one who would buy the idea? Of course, I would
> not be able to sell the idea, but there are people in the Haskell
> community with a better chance to get commercial and industrial
> support for Haskell. As far as I can see, Haskell is a good product
> (I mean, from the point of view of an investitor).
> It is used in many universities, even in my country (for instance,
> it is used as introductory language in the Computer Science
> Department of my own university). With a little make up, things
> like Zermello-Frankel notation would give a good replacement
> for SQL. A good computer algebra library (like the one that
> prof. R. Malaquias is creating) would make Haskell a good
> scripting language to replace things like Mathlab, Maple, etc.
> I really think that it is possible to lure a software company
> into investing in Haskell.
>
> You could say that it would be better to have groups
> of voluntary programmers (like the people who created Linux
> and GNU), instead of companies like Microsoft. Well, I guess
> that Haskell has atractive features to these groups too. For instance,
> Haskell could be used to produce a free version of Maple,
> Matlab, or even Labview.
>
> I want to suggest to the Haskell community
> the creation of something like a public relation interest group,
> who would search support from the software industry, and
> from strong teams of voluntary workers (GNU, Linux, etc.)
>
> EdCosta
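[Editorial note] The fold-type report accumulation that the message above says XSLT lacks is a one-liner over a tree in Haskell. The XML type, foldXML, and attribute names here are all hypothetical stand-ins, used only to make the point concrete: summing an "amount" attribute over a whole document needs no assignment.

```haskell
-- Hypothetical minimal XML tree: element name, attributes, children.
data XML = Elem String [(String, String)] [XML]

-- Accumulate over the element and then over all of its children.
foldXML :: (b -> XML -> b) -> b -> XML -> b
foldXML f z e@(Elem _ _ children) = foldl (foldXML f) (f z e) children

-- A report summary: total the "amount" attribute of every <item>.
totalAmount :: XML -> Int
totalAmount = foldXML step 0
  where
    step acc (Elem "item" attrs _) = acc + maybe 0 read (lookup "amount" attrs)
    step acc _                     = acc

doc :: XML
doc = Elem "report" []
        [ Elem "item" [("amount", "3")] []
        , Elem "item" [("amount", "4")] []
        ]

main :: IO ()
main = print (totalAmount doc)
```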
XSLT: Is assignment really necessary for performance?
In this article (http://www.mulberrytech.com/xsl/xsl-list/archive/msg07336.html), Clark Evans claims that XSLT is too slow or too memory intensive because it is a functional language. He suggests adding hashtables to XSLT to improve performance. Is this really necessary? Is the problem functional programming, or something broken in the design of the XSLT language?

Background: XSLT is a functional programming language designed to process XML documents. It is still in the draft stage. When it becomes a standard, XSLT functional programming is likely to be one of the dominant forms of Internet programming. We are already using XSLT here to manage generating the site.

-Alex-
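[Editorial note] One answer to the question, sketched in Haskell rather than XSLT: fast keyed lookup does not require assignment, since a persistent map gives it purely. (Data.Map is a later API than this message; it is shown only to illustrate that the feature needs no mutable hashtable.)

```haskell
import qualified Data.Map as M

-- Build an id-to-element index once, purely; look up as often as needed.
index :: [(String, String)] -> M.Map String String
index = M.fromList

main :: IO ()
main = print (M.lookup "id42" (index [("id41", "<a/>"), ("id42", "<b/>")]))
```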
Re: OO in Haskell
On Tue, 5 Oct 1999, George Russell wrote:

> Perhaps I'm being stupid. (It certainly wouldn't be the first time!)
> But what does OO give me that I can't get with existential types
> (in datatype definitions) and multiparameter type classes? The latter
> seem to me much more powerful, since I can add dancing and singing methods
> to objects without having to go back to the original class definition.

Encapsulation of state.

-Alex-
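[Editorial note] The exchange above can be made concrete. A hedged sketch (all names hypothetical) of encapsulating state with an existential type: the counter's internal representation is hidden, and clients can only use the packaged interface, though the state is threaded purely rather than mutated as in OO.

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- The existential hides the state type s; only the interface escapes.
data Counter = forall s. Counter s (s -> s) (s -> Int)

newCounter :: Counter
newCounter = Counter (0 :: Int) (+ 1) id

tick :: Counter -> Counter
tick (Counter s inc get) = Counter (inc s) inc get

value :: Counter -> Int
value (Counter s _ get) = get s

main :: IO ()
main = print (value (tick (tick newCounter)))
```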
CPP is not part of Haskell
The Haskell report says that, by convention, only _HASKELL_ files have a .hs or .lhs extension.

--http://www.haskell.org/onlinereport/literate.html--
By convention, the style of comment is indicated by the file extension,
with ".hs" indicating a usual Haskell file and ".lhs" indicating a
literate Haskell file
--

It is always a surprise when you try to load a .hs or .lhs file in hugs and discover that it really requires cpp. Either cpp (or some preprocessor standard) should be made part of the Haskell language definition, or Haskell files that require a preprocessor should have a different extension. Since I assume that the hugs team has a good reason not to build in cpp functionality, I am suggesting that Haskell files with preprocessor directives have the extension .hs.cpp

One could argue that Hugs is actually more flexible because it supports any arbitrary preprocessor, but the substantive issue is that running each imported script through python & cpp adds substantial load time. This load time is much less acceptable in an interpreted environment than in the compile-time environment of GHC. In this environment, it is much better to "compile" .hs.cpp files into .hs files before runtime. In the case of Xtract, we are renaming all the files to .hs.cpp, generating a clean set of .hs files, and then running hugs. The alternative is just too slow.

-Alex-
Xtract with hugs and Haskell implementation inconsistency
I am trying to get Malcolm Wallace's Xtract code to run with Hugs and have noticed a few inconsistencies between whatever compiler he is using and hugs...

1. Where `maybe` is exported

His compiler does not export `maybe` from the Maybe library but Hugs does. The report says that `maybe` should be exported from both. I think the report is wrong and that `maybe` should only be exported from Maybe. In any case, Hugs correctly implements H98 and his compiler does not. (MW, you need to hide maybe in ParseSTXml.hs ParseSTHtml.hs and PPSTXmlNew.hs)

2. Allowing use of qualified names without importing qualified

MW's compiler allows use of Prelude.elem without qualified import of the prelude. Hugs does not like that. I think Hugs is right, but I am not sure which is "correct". (MW, the offending line is in Combinators.hs on line 169)

3. Inadequacy of Haskell

There are a few places where MW relies on the preprocessor rather than Haskell. Xtract has this code:

--From ParseSTXml.hs--
#define PARSEARG(tok) P (\st inp -> case inp of { \
        ((p,tok n):ts) -> [(n,st,ts)]; \
        ts -> [] } )

name :: Parser SymTab Token Name
name = PARSEARG(TokName)

string, freetext :: Parser SymTab Token String
string   = PARSEARG(TokName)
freetext = PARSEARG(TokFreeText)
---

Is there a clean way of achieving the same functionality without relying on the preprocessor?

-Alex-

PS If you want to use HUGS to test, you need to pass HUGS a preprocessor:

    runhugs +98 -F"python hpp.py" XTract.hs

Here is a short preprocessor in Python:

---hpp.py---
import sys,string,os;
cpp = 'gcc -E -xc -traditional -D__HASKELL_98__ '+sys.argv[1];
for line in os.popen(cpp).readlines()[1:]:
    if line: print line[:-1];
---
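[Editorial note] One possible answer to question 3, sketched with simplified stand-ins for Xtract's Parser and Token types (the types and names below are illustrative, not Xtract's actual definitions): replace the CPP macro with a higher-order parser that takes a token-selector function instead of a constructor name.

```haskell
-- Simplified stand-ins for Xtract's types; only the shape matters here.
data Token = TokName String | TokFreeText String

newtype Parser st t a = P (st -> [(Int, t)] -> [(a, st, [(Int, t)])])

-- The PARSEARG macro body, abstracted over a selector function.
parseArg :: (t -> Maybe a) -> Parser st t a
parseArg sel = P step
  where
    step st ((_, tk) : ts) | Just n <- sel tk = [(n, st, ts)]
    step _  _                                 = []

name, freetext :: Parser st Token String
name     = parseArg (\t -> case t of TokName n     -> Just n; _ -> Nothing)
freetext = parseArg (\t -> case t of TokFreeText s -> Just s; _ -> Nothing)

runP :: Parser st t a -> st -> [(Int, t)] -> [(a, st, [(Int, t)])]
runP (P f) = f

main :: IO ()
main = print [n | (n, _, _) <- runP name () [(1, TokName "foo")]]
```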