Re: [GHC] #2047: ghc compiled program crashes with segfault when using -M and/or -c

2008-01-16 Thread GHC
#2047: ghc compiled program crashes with segfault when using -M and/or -c
+---
 Reporter:  mte |  Owner: 
 Type:  bug | Status:  new
 Priority:  normal  |  Milestone:  6.8.3  
Component:  Runtime System  |Version:  6.8.2  
 Severity:  critical| Resolution: 
 Keywords:  gc segfault | Difficulty:  Unknown
 Testcase:  |   Architecture:  x86
   Os:  Windows |  
+---
Changes (by simonmar):

  * difficulty:  => Unknown
  * milestone:  => 6.8.3

Comment:

 This certainly sounds like a bug in the compacting GC.  Without means to
 reproduce it though, it's impossible to diagnose.

 If you could reproduce the error in non-proprietary code, or somehow
 arrange to give me the code with a no-redistribution license, that would
 help a lot.

 If you need to use `-M` without the compacting GC, the workaround is to
 add `-c100`, which raises the automatic compaction threshold to 100% of
 the maximum heap size and so effectively disables compaction.
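
 For example (an illustrative invocation, not taken from the ticket), the
 two RTS options might be combined like this when running the compiled
 program:

   ./MyProgram +RTS -M512m -c100 -RTS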

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/2047#comment:1
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: [GHC] #459: Bad parse error message

2008-01-16 Thread GHC
#459: Bad parse error message
---+
 Reporter:  nobody |  Owner:  simonmar
 Type:  bug| Status:  new 
 Priority:  normal |  Milestone:  _|_ 
Component:  Compiler (Parser)  |Version:  6.4.1   
 Severity:  minor  | Resolution:  None
 Keywords: | Difficulty:  Unknown 
 Testcase: |   Architecture:  Unknown 
   Os:  Unknown|  
---+
Changes (by simonmar):

  * priority:  low => normal

Comment:

 See also #2046.  Raising priority based on feedback.

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/459#comment:4
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: [GHC] #2046: Parse errors for mismatched brackets are awful

2008-01-16 Thread GHC
#2046: Parse errors for mismatched brackets are awful
---+
 Reporter:  NeilMitchell   |  Owner:   
 Type:  feature request| Status:  closed   
 Priority:  normal |  Milestone:   
Component:  Compiler (Parser)  |Version:  6.6.1
 Severity:  minor  | Resolution:  duplicate
 Keywords: | Difficulty:  Unknown  
 Testcase: |   Architecture:  Unknown  
   Os:  Unknown|  
---+
Changes (by simonmar):

  * component:  Compiler => Compiler (Parser)
  * difficulty:  => Unknown
  * status:  new => closed
  * resolution:  => duplicate

Comment:

 This is a long-standing problem, see #459 :-)

 The "possibly incorrect indentation" message arises because the token the
 parser is looking at is one that was generated by layout.  I honestly
 don't know whether there's anything we can easily do to improve matters in
 the context of the current parser, but I suspect not.

 Anyway, I propose to look at it next time I'm in the area.

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/2046#comment:2
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: [GHC] #2021: let ghc find framework header files and link with frameworks located in $HOME/Library/Frameworks

2008-01-16 Thread GHC
#2021: let ghc find framework header files and link with frameworks located in
$HOME/Library/Frameworks
-+--
 Reporter:  maeder   |  Owner: 
 Type:  feature request  | Status:  new
 Priority:  normal   |  Milestone:  6.10 branch
Component:  Compiler |Version:  6.8.2  
 Severity:  normal   | Resolution: 
 Keywords:   | Difficulty:  Easy (1 hr)
 Testcase:   |   Architecture:  Multiple   
   Os:  MacOS X  |  
-+--
Comment (by maeder):

 Delete "and link with frameworks located in $HOME/Library/Frameworks" from
 the description of this ticket.

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/2021#comment:12
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: [GHC] #1012: ghc panic with mutually recursive modules and template haskell

2008-01-16 Thread GHC
#1012: ghc panic with mutually recursive modules and template haskell
--+-
 Reporter:  guest |  Owner: 
 Type:  bug   | Status:  reopened   
 Priority:  normal|  Milestone:  6.10 branch
Component:  Template Haskell  |Version:  6.8.2  
 Severity:  normal| Resolution: 
 Keywords:| Difficulty:  Unknown
 Testcase:  TH_import_loop|   Architecture:  Multiple   
   Os:  Multiple  |  
--+-
Changes (by simonpj):

  * milestone:  _|_ => 6.10 branch

Comment:

 Fair enough. I have taken a little look at this, based on fons's
 suggestion ``every module M that depends on a module C in a cycle, but is
 not a member of that cycle, should have an implicit dependency on each of
 the modules C1.. Cn in the cycle.``. Yes, I think that would not be too
 hard to do.  There are two places to think about:

  * `ghc --make`: When deciding the up-sweep order, first do a SCC analysis
 finding strongly connected components of modules, and top-sort those
 components.  Then linearise each component. That gives a linear order that
 respects fons's suggestion.

  * `ghc -M`: similar story, but less neat.  We have to emit lots of extra
 dependencies in the `makefile`, so that M depends on C1..Cn.

 Not very hard, but more than an hour's work.  Let's do it for 6.10.
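
 For concreteness, a minimal sketch of that SCC step using `Data.Graph`
 (illustrative only, not the actual GHC code; the `ModSummary` record here
 is made up, not GHC's own type):

   import Data.Graph (stronglyConnComp, flattenSCC)

   -- A made-up module summary: a module name plus the names it imports.
   data ModSummary = ModSummary { modName :: String, modImports :: [String] }

   -- Group modules into strongly connected components (the cycles), in
   -- dependency order (dependencies first), then linearise each component.
   upsweepOrder :: [ModSummary] -> [[String]]
   upsweepOrder mods = map (map modName . flattenSCC) (stronglyConnComp graph)
     where
       graph = [ (m, modName m, modImports m) | m <- mods ]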

 Simon

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/1012#comment:14
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: [GHC] #2046: Parse errors for mismatched brackets are awful

2008-01-16 Thread GHC
#2046: Parse errors for mismatched brackets are awful
---+
 Reporter:  NeilMitchell   |  Owner:   
 Type:  feature request| Status:  closed   
 Priority:  normal |  Milestone:   
Component:  Compiler (Parser)  |Version:  6.6.1
 Severity:  minor  | Resolution:  duplicate
 Keywords: | Difficulty:  Unknown  
 Testcase: |   Architecture:  Unknown  
   Os:  Unknown|  
---+
Comment (by NeilMitchell):

 A simple fix would be that if the parsing fails with a "possibly incorrect
 indentation" style warning, you rerun the lexer over the original file,
 without inserting symbols based on indentation. After you have this raw
 lexical stream, you do a simple scan with a bracket stack and check
 everything matches up. If it doesn't, you give a very precise error
 indicating exactly which bracket was not matched. If it does, you fall back
 to the same situation we have now.

 Assuming the lexer is sufficiently modular, this should be easy (at a
 guess), and not complicate any other part of the compiler.
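
 For concreteness, a rough sketch of such a bracket-stack scan (illustrative
 only, not GHC code; the token representation here is made up):

   data Bracket = Round | Square | Curly deriving (Eq, Show)

   type Pos = (Int, Int)  -- (line, column), purely for the error message

   -- Scan a raw token stream, pushing opening brackets and popping on
   -- closers; report the first mismatch or unclosed bracket.
   checkBrackets :: [(Pos, Char)] -> Maybe String
   checkBrackets = go []
     where
       go stack ((pos, c) : rest)
         | Just b <- open c  = go ((pos, b) : stack) rest
         | Just b <- close c = case stack of
             ((_, b') : stack') | b == b' -> go stack' rest
             _ -> Just ("mismatched closing bracket " ++ [c] ++ " at " ++ show pos)
         | otherwise = go stack rest
       go []             [] = Nothing
       go ((pos, _) : _) [] = Just ("unclosed bracket opened at " ++ show pos)

       open  c = lookup c [('(', Round), ('[', Square), ('{', Curly)]
       close c = lookup c [(')', Round), (']', Square), ('}', Curly)]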

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/2046#comment:3
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: [GHC] #2021: let ghc find framework header files and link with frameworks located in $HOME/Library/Frameworks

2008-01-16 Thread GHC
#2021: let ghc find framework header files and link with frameworks located in
$HOME/Library/Frameworks
-+--
 Reporter:  maeder   |  Owner: 
 Type:  feature request  | Status:  new
 Priority:  normal   |  Milestone:  6.10 branch
Component:  Compiler |Version:  6.8.2  
 Severity:  normal   | Resolution: 
 Keywords:   | Difficulty:  Easy (1 hr)
 Testcase:   |   Architecture:  Multiple   
   Os:  MacOS X  |  
-+--
Comment (by judah):

 Replying to [comment:11 maeder]:
 > Check at least in my (recently attached) version that passes -F properly
 > to gcc and ld for ghc-6.8.3, I've removed checking the home directory, so
 > the above objections are no longer valid.

 If I understand correctly, that latest version fixes bug #1975 (see that
 ticket for a test case).  For organization's sake, can we keep this ticket
 about adding `$HOME/Library/Frameworks`, and use #1975 to track that bug
 and its fixes?

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/2021#comment:13
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: [GHC] #2021: let ghc find framework header files and link with frameworks located in $HOME/Library/Frameworks

2008-01-16 Thread GHC
#2021: let ghc find framework header files and link with frameworks located in
$HOME/Library/Frameworks
-+--
 Reporter:  maeder   |  Owner: 
 Type:  feature request  | Status:  new
 Priority:  normal   |  Milestone:  6.10 branch
Component:  Compiler |Version:  6.8.2  
 Severity:  normal   | Resolution: 
 Keywords:   | Difficulty:  Easy (1 hr)
 Testcase:   |   Architecture:  Multiple   
   Os:  MacOS X  |  
-+--
Comment (by maeder):

 I've missed #1975, so go ahead and fix it

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/2021#comment:14
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: [GHC] #1826: unable to list source for <exception thrown> should never occur

2008-01-16 Thread GHC
#1826: unable to list source for <exception thrown> should never occur
-+--
 Reporter:  guest|  Owner: 
 Type:  feature request  | Status:  new
 Priority:  normal   |  Milestone:  6.8.3  
Component:  GHCi |Version:  6.8.1  
 Severity:  normal   | Resolution: 
 Keywords:   | Difficulty:  Easy (1 hr)
 Testcase:   |   Architecture:  Multiple   
   Os:  Multiple |  
-+--
Comment (by igloo):

 I've improved the error, but I'm not sure how to tell if we are running
 with `:trace`; is there an easy way?

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/1826#comment:5
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


[GHC] #2049: GHCi doesn't fully load previously broken modules

2008-01-16 Thread GHC
#2049: GHCi doesn't fully load previously broken modules
+---
Reporter:  ajd  |   Owner: 
Type:  bug  |  Status:  new
Priority:  normal   |   Component:  GHCi   
 Version:  6.8.2|Severity:  normal 
Keywords:   |Testcase: 
Architecture:  Unknown  |  Os:  Unknown
+---
 Scenario:

 I have a project with three modules: Mod1, Mod2 and Mod3. Mod3 calls an
 undefined function and so shouldn't load. I call GHCi as

 ghci Mod3.hs

 The output looks like this:

  GHCi, version 6.8.2: http://www.haskell.org/ghc/  :? for help
  Loading package base ... linking ... done.
  [1 of 3] Compiling Mod1 ( Mod1.hs, interpreted )
  [2 of 3] Compiling Mod2 ( Mod2.hs, interpreted )
  [3 of 3] Compiling Mod3 ( Mod3.hs, interpreted )

  Mod3.hs:6:6: Not in scope: `foo'

  Mod3.hs:6:12: Not in scope: `barf'
  Failed, modules loaded: Mod2, Mod1.

 The problem comes when I edit the file using GHCi's :e command and then
 reload with :r. The output looks like this (which is strange in itself,
 because the modules are listed in the wrong order):

  *Mod2> :r
  [3 of 3] Compiling Mod3 ( Mod3.hs, interpreted )
  Ok, modules loaded: Mod2, Mod3, Mod1.

 The problem is this: when I call a function from Mod3 from the GHCi
 toplevel, I get a "not in scope" error. I can call the function as
 Mod3.<function name> but Mod3 does not show up in GHCi's tab
 completion. I don't believe that this is the correct behavior.

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/2049
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


[GHC] #2050: GHCi should keep a persistent history file

2008-01-16 Thread GHC
#2050: GHCi should keep a persistent history file
+---
Reporter:  ajd  |   Owner: 
Type:  feature request  |  Status:  new
Priority:  normal   |   Component:  GHCi   
 Version:  6.8.2|Severity:  normal 
Keywords:   |Testcase: 
Architecture:  Unknown  |  Os:  Unknown
+---
 It would be nice if GHCi kept a persistent history of commands like bash
 does. This would be especially useful in testing: if one is trying to get
 a certain command to work, and the command is at all complicated, it is
 annoying to have to copy and paste or retype the command every time you
 want to see if the function works.

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/2050
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: [GHC] #2050: GHCi should keep a persistent history file

2008-01-16 Thread GHC
#2050: GHCi should keep a persistent history file
+---
Reporter:  ajd  |Owner: 
Type:  feature request  |   Status:  new
Priority:  normal   |Milestone: 
   Component:  GHCi |  Version:  6.8.2  
Severity:  normal   |   Resolution: 
Keywords:   | Testcase: 
Architecture:  Unknown  |   Os:  Unknown
+---
Comment (by judah):

 I have also often wished for this.  The header `readline/history.h`
 provides the functions `read_history` and `write_history` which are also
 present in editline.  This task will be easy to implement if we add those
 to the readline and editline packages.  (Although, a pure Haskell
 implementation would probably also be pretty easy to write.)
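
 A minimal sketch of what such a pure Haskell fallback might look like (the
 file name and function names here are made up, not an existing API):

   import System.Directory (doesFileExist, getHomeDirectory)
   import System.FilePath ((</>))

   historyFile :: IO FilePath
   historyFile = do
       home <- getHomeDirectory
       return (home </> ".ghci_history")

   -- Load any previously saved commands, oldest first.
   loadHistory :: IO [String]
   loadHistory = do
       path <- historyFile
       exists <- doesFileExist path
       if exists then fmap lines (readFile path) else return []

   -- Append one command to the history file.
   saveHistoryLine :: String -> IO ()
   saveHistoryLine cmd = do
       path <- historyFile
       appendFile path (cmd ++ "\n")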

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/2050#comment:2
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: [GHC] #2050: GHCi should keep a persistent history file

2008-01-16 Thread GHC
#2050: GHCi should keep a persistent history file
+---
Reporter:  ajd  |Owner:  ajd
Type:  feature request  |   Status:  new
Priority:  normal   |Milestone: 
   Component:  GHCi |  Version:  6.8.2  
Severity:  normal   |   Resolution: 
Keywords:   | Testcase: 
Architecture:  Unknown  |   Os:  Unknown
+---
Changes (by ajd):

  * owner:  => ajd

Comment:

 I think System.Posix.Readline already has a binding to those functions via
 the addHistory function. I just finished writing a simple implementation;
 I'll post a patch when it builds and tests.

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/2050#comment:3
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: GHC Data.List.sort performance question

2008-01-16 Thread Marcus D. Gabriel

Hello Ian,

Thank you for the two links to the previous threads, I glanced at them and
will read them more carefully later so that I can understand the point a
little better.

Sorry about the duplicate thread relative to the Haskell Cafe.

Thanks also for applying Bertram's patch.

Cheers,
- Marcus

Ian Lynagh wrote:

Hi Marcus,

On Mon, Jan 14, 2008 at 10:01:49PM +0100, Marcus D. Gabriel wrote:
  

code in libraries/base/Data/List.hs
for merge is

merge cmp xs [] = xs
merge cmp [] ys = ys

merge cmp [] ys = ys
merge cmp xs [] = xs
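
For context, the surrounding merge in the library's mergesort looked roughly
like this (a from-memory sketch, not the exact source; the discussion is
about the relative order of the first two clauses):

  merge :: (a -> a -> Ordering) -> [a] -> [a] -> [a]
  merge cmp xs [] = xs
  merge cmp [] ys = ys
  merge cmp xs@(x:xs') ys@(y:ys') =
      case cmp x y of
        GT -> y : merge cmp xs  ys'
        _  -> x : merge cmp xs' ys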



This actually came up a while ago, in this thread:
http://thread.gmane.org/gmane.comp.lang.haskell.cafe/30598

I've just applied Bertram's patch from
http://www.haskell.org/pipermail/libraries/2007-November/008621.html
which makes the change you suggest.


Thanks
Ian

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


  



--
 Marcus D. Gabriel, Ph.D.Email:[EMAIL PROTECTED]
 213 ter, rue de Mulhouse  Tel: +33.3.89.69.05.06
 F68300 Saint Louis  FRANCE   Portable: +33.6.34.56.07.75


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Integrating editline with ghc

2008-01-16 Thread Judah Jacobson
Hi all,

I have managed to build ghc using the initial release of the editline package:

Hackage link: 
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/editline-0.1
Haddock: http://code.haskell.org/editline/dist/doc/html/editline/

As I've mentioned before, there are two independent modules:
- System.Console.Editline is a very basic (and experimental) interface
to the native editline APIs.
- System.Console.Editline.Readline contains the readline APIs provided
by the editline library (mostly a cut/paste of
System.Console.Readline).

Currently I'm using just the latter as a drop-in replacement for
System.Console.Readline in ghci.  I have added a --with-editline flag
to ghc's configure script, which has no effect if it's not specified,
and otherwise does the following:

- Throw an error (at configure time) if editline isn't present (as
$hardtop/libraries/editline)
- Use the editline package instead of readline when building ghc stage 2
- Use CPP to make InteractiveUI.hs (the main ghci loop) import
System.Console.Editline.Readline instead of System.Console.Readline (a
rough sketch of this follows below).
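
A rough sketch of that CPP switch (illustrative only; USE_EDITLINE is a
made-up macro name standing in for whatever the --with-editline flag would
define):

  #ifdef USE_EDITLINE
  import System.Console.Editline.Readline (readline, addHistory)
  #else
  import System.Console.Readline (readline, addHistory)
  #endif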

Does that sound like the right way to handle this?  If so, I'll send a
darcs patch.

Also, should editline be made a boot-package or an extra-package (or neither)?

Thanks,
-Judah
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: bindist for Intel MacOS X 10.4 (Tiger) with static libs

2008-01-16 Thread Manuel M T Chakravarty

Thorkil Naur:

Hello,

On Tuesday 08 January 2008 15:07, Christian Maeder wrote:

Hi,

I've succeeded in building a binary distribution that uses static
libraries for gmp and readline. libreadline.a, libncurses.a and libgmp.a
with corresponding header files are included. (For license issues ask
someone else.)


On http://gmplib.org/ we find:

GMP is distributed under the GNU LGPL. This license makes the library free to
use, share, and improve, and allows you to pass on the result. The license
gives freedoms, but also sets firm restrictions on the use with non-free
programs.

I have not attempted to check whether your distribution fulfills the
requirements of the LGPL.


It does fulfil them.  The source code of all components of the system
is available, enabling users to build the same software with a
different version of GMP.  That's all that the LGPL requires of
software linked against an LGPL library.


Further, on http://cnswww.cns.cwru.edu/php/chet/readline/rltop.html:

Readline is free software, distributed under the terms of the GNU General
Public License, version 2. This means that if you want to use Readline in a
program that you release or distribute to anyone, the program must be free
software and have a GPL-compatible license.

For your distribution to adhere to this, it appears to require GHC to have a
GPL-compatible license. I don't believe it does.


It does.  GHC's codebase is a mix of BSD3, LGPL, and GPL.  They are
perfectly compatible.  See http://www.fsf.org/licensing/licenses/index_html.


Manuel
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Re[4]: bindist for Intel MacOS X 10.4 (Tiger) with static libs

2008-01-16 Thread Manuel M T Chakravarty

Bulat Ziganshin:

for me, GMP is a much more problematic issue. strictly speaking, we
can't say that GHC is BSD-licensed because it includes LGPL-licensed
code (and, what's much worse, it includes this code in the run-time libs)


Of course, GHC is BSD3 licensed.  It includes the GMP code as part of
its tarball to save people the hassle of separately installing GMP
on platforms that don't have it by default (ie, essentially all
non-Linux OSes). That doesn't change the license of GHC at all.  It is a
mere aggregation of different projects.


Even binary distributions of GHC that include libgmp.a and statically  
link it into compiled code are not a problem.  You may even use such  
GHC distributions to compile proprietary code and distribute it.  All  
that is needed to make this legal is to (a) properly acknowledge the  
use of GMP in the code and (b) give users access to another version of  
the proprietary program that links GMP dynamically.  Point (b) is  
sufficient to comply with Section 4(d) of the LGPL, which requires you  
to enable users to swap one version of GMP for another in a program  
that uses GMP.


Manuel

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Integrating editline with ghc

2008-01-16 Thread Manuel M T Chakravarty

Judah Jacobson:

Hackage link: 
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/editline-0.1
Haddock: http://code.haskell.org/editline/dist/doc/html/editline/

As I've mentioned before, there are two independent modules:
- System.Console.Editline is a very basic (and experimental) interface
to the native editline APIs.
- System.Console.Editline.Readline contains the readline APIs provided
by the editline library (mostly a cut/paste of
System.Console.Readline).

Currently I'm using just the latter as a drop-in replacement for
System.Console.Readline in ghci.  I have added a --with-editline flag
to ghc's configure script, which has no effect if it's not specified,
and otherwise does the following:

- Throw an error (at configure time) if editline isn't present (as
$hardtop/libraries/editline)
- Use the editline package instead of readline when building ghc  
stage 2

- Use CPP to make InteractiveUI.hs (the main ghci loop) import
System.Console.Editline.Readline instead of System.Console.Readline.

Does that sound like the right way to handle this?  If so, I'll send a
darcs patch.


Sounds good to me.

Also, should editline be made a boot-package or an extra-package (or  
neither)?


Given that we'd like this to be the default on some platforms, I believe
it belongs in boot-packages (just like readline).


Manuel

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Integrating editline with ghc

2008-01-16 Thread Thorkil Naur
Hello,

On Wednesday 16 January 2008 22:05, Judah Jacobson wrote:
 Hi all,
 
 I have managed to build ghc using the initial release of the editline 
package:
 
 Hackage link: 
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/editline-0.1
 Haddock: http://code.haskell.org/editline/dist/doc/html/editline/
 
 As I've mentioned before, there are two independent modules:
 - System.Console.Editline is a very basic (and experimental) interface
 to the native editline APIs.
 - System.Console.Editline.Readline contains the readline APIs provided
 by the editline library (mostly a cut/paste of
 System.Console.Readline).

Excellent!

 
 Currently I'm using just the latter as a drop-in replacement for
 System.Console.Readline in ghci.  I have added a --with-editline flag
 to ghc's configure script, which has no effect if it's not specified,
 and otherwise does the following:
 
 - Throw an error (at configure time) if editline isn't present (as
 $hardtop/libraries/editline)

That's the way.

 - Use the editline package instead of readline when building ghc stage 2
 - Use CPP to make InteractiveUI.hs (the main ghci loop) import
 System.Console.Editline.Readline instead of System.Console.Readline.
 
 Does that sound like the right way to handle this?  If so, I'll send a
 darcs patch.

An alternative that would make the GHC configure script more symmetric with
respect to the command line editor would be to have --with-line-editor=editline,
--with-line-editor=readline and also, perhaps, --with-line-editor=none (or
even --with-line-editor=). All of these with, hopefully, obvious meanings. On
top of this, one could have --with-line-editor=automatic with some automatic
selection taking place. And the default? I'm sure that my favorite
--with-line-editor=none will not be considered practical, so I will leave
this most difficult choice to others.

 
 Also, should editline be made a boot-package or an extra-package (or 
neither)?
 
 Thanks,
 -Judah
 ...

Thanks a lot again.

Best regards
Thorkil
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: [Haskell] Should the following program be accepted by ghc?

2008-01-16 Thread Simon Peyton-Jones
| I have been playing with ghc6.8.1 and type families and the following
| program is accepted without any type-checking error:

Martin's comments are spot on.

FWIW, in the HEAD -- as Stefan says, type families are not *supposed* to work 
in 6.8.1 -- your program gives

TF.hs:9:7:
Couldn't match expected type `a' against inferred type `b'
  `a' is a rigid type variable bound by
  the type signature for `c' at TF.hs:8:7
  `b' is a rigid type variable bound by
  the type signature for `c' at TF.hs:8:15
  Expected type: a :=: b
  Inferred type: b :=: b
In the expression: Eq
In the definition of `c': c Eq = Eq

That's fair enough.  If you change K to be a 'data family', then decomposition 
works, and the program compiles.

Bugs in type families against the HEAD are, as Don says, highly welcome.

Simon
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] Should the following program be accepted by ghc?

2008-01-16 Thread J.N. Oliveira


On Jan 16, 2008, at 2:08 AM, Bruno Oliveira wrote:


Hello,

I have been playing with ghc6.8.1 and type families and the  
following program is accepted without any type-checking error:



data a :=: b where
   Eq :: a :=: a



decomp :: f a :=: f b -> a :=: b
decomp Eq = Eq


However, I find this odd because if you interpret f as a function  
and :=: as equality, then this is saying that


if f a = f b then a = b


This is saying that  f  is injective. So perhaps the standard  
interpretation leads implicitly to this class of functions.


Cheers

jno

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell] Re: [Haskell-cafe] ANNOUNCE: HStringTemplate -- An Elegant, Functional, Nifty Templating Engine for Haskell

2008-01-16 Thread Bit Connor
On Jan 14, 2008 9:47 AM, Sterling Clover [EMAIL PROTECTED] wrote:
 HStringTemplate is a port of Terrence Parr's lovely StringTemplate
 (http://www.stringtemplate.org) engine to Haskell.

 It is available, cabalized, at:
 darcs get http://code.haskell.org/HStringTemplate/

Template systems have been a crucial missing part of Haskell web
development. I am very happy to hear about this project, and will
definitely be looking at this in the near future!

Thanks,
Bit
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] Should the following program be accepted by ghc?

2008-01-16 Thread Derek Elkins
On Wed, 2008-01-16 at 09:18 +, J.N. Oliveira wrote:
 On Jan 16, 2008, at 2:08 AM, Bruno Oliveira wrote:
 
  Hello,
 
  I have been playing with ghc6.8.1 and type families and the  
  following program is accepted without any type-checking error:
 
  data a :=: b where
 Eq :: a :=: a
 
  decomp :: f a :=: f b -> a :=: b
  decomp Eq = Eq
 
  However, I find this odd because if you interpret f as a function  
  and :=: as equality, then this is saying that
 
  if f a = f b then a = b
 
 This is saying that  f  is injective. So perhaps the standard  
 interpretation leads implicitly to this class of functions.

Just like data constructors, type constructors are injective. f a
doesn't simplify and so essentially you have structural equality of the
type terms thus f a = f b is -equivalent- to a = b.  Obviously type
functions change this, just like normal value functions do at the value
level.
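
A small self-contained illustration of the distinction (not from the
original program; the type family F below is made up):

  {-# LANGUAGE GADTs, TypeOperators, TypeFamilies #-}

  data a :=: b where
    Eq :: a :=: a

  -- Accepted: type constructors are injective, so f a ~ f b entails a ~ b.
  decompCon :: f a :=: f b -> a :=: b
  decompCon Eq = Eq

  type family F a

  -- By contrast, F a ~ F b does not entail a ~ b, so the analogous
  -- definition for a type family is rejected:
  --
  -- decompFam :: F a :=: F b -> a :=: b
  -- decompFam Eq = Eq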

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell] ANN: 16 updated packages: HDBC, ConfigFile, MissingH, and more

2008-01-16 Thread John Goerzen
Hi folks,

I've made new releases of a number of my packages.  Rather than flood
the list with individual announcements, I've consolidated them here.
Many of these packages have only GHC 6.8 compatibility updates, but a
fair number also have more significant changes.  Detailed changes
lists are below.

These packages have been updated:

anydbm
ConfigFile
darcs-buildpackage
dfsbuild
HDBC
HDBC-odbc
HDBC-postgresql
HDBC-sqlite3
hg-buildpackage
HSH
hslogger
LDAP
ListLike
magic
MissingH
srcinst

These have not yet been updated:

arch2darcs (no longer maintained)
gtkrsync   (pending newer GTK upload in Debian)
hpodder(pending newer haxml upload in Debian)
missingpy  (pending available time)

All packages have been uploaded to:

 * Hackage
 * Debian sid
 * Their software.complete.org site, if any
 * Their darcs or Mercurial repository

anydbm 1.0.5

A generic interface library for DBM-like databases

Update for GHC 6.8.x.

Homepage: http://software.complete.org/anydbm

ConfigFile 1.0.4

Library for reading/writing human-editable configuration files

Updated for GHC 6.8 compatibility and the new MissingH

Homepage: http://software.complete.org/configfile

darcs-buildpackage 0.5.12
-
Tools for building Debian packages with source stored in Darcs

Updated for GHC 6.8 compatibility and the new MissingH

Homepages:
  Darcs repo: http://darcs.complete.org/darcs-buildpackage
  Debian: http://packages.qa.debian.org/darcs-buildpackage

dfsbuild 1.0.2
--
Tool for building Debian From Scratch bootable ISO images

* Update for GHC 6.8.x

Homepage:
  http://people.debian.org/~jgoerzen/dfs

HDBC 1.1.4
--
Haskell DataBase Connectivity, generic RDBMS access

* Update cabal for GHC 6.8 thanks to Paulo Tanimoto

Homepage: http://software.complete.org/hdbc

HDBC-odbc 1.1.4.0
-
HDBC driver for ODBC connectivity

* Updates for GHC 6.8 thanks to Bjorn Bringert and John Goerzen

Homepage: http://software.complete.org/hdbc-odbc

HDBC-postgresql 1.1.4.0
---
HDBC driver for PostgreSQL

* Make sure Hackage tarball gets all files in package

* Update for GHC 6.8 thanks to Duncan Coutts

* Now use pg_config to find PostgreSQL library locations automatically
  thanks to Duncan Coutts

Homepage: http://software.complete.org/hdbc-postgresql

HDBC-sqlite3 1.1.4.0

HDBC driver for Sqlite v3

* Cabal updates for GHC 6.8 thanks to Duncan Coutts and Paulo Tanimoto

* Allow statements to remain prepared after execution has finished
  thanks to Toby Allsopp

Homepage: http://software.complete.org/hdbc-sqlite3

hg-buildpackage 1.0.4
-
Tools for building Debian packages with source in Mercurial

Updates for compatibility with GHC 6.8, newest MissingH and HSH

Homepages: 

  Mercurial repo: http://hg.complete.org/hg-buildpackage
  Hackage: 
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/hg-buildpackage-1.0.4
  Debian: http://packages.debian.org/hg-buildpackage

  [ no trac instance ]
   
HSH 1.2.5
-
Library for easy shell-like scripting in Haskell

* Clean ups and doc updates [jgoerzen]

* Update for newer MissingH [jgoerzen]

* Update .cabal for GHC 6.8 thanks to gwern0

* Various cleanups and reorgs thanks to gwern0

* New wcW (wc word count) in ShellEquivs thanks to gwern0

Homepage: http://software.complete.org/hsh

hslogger 1.0.4
--
Library providing logging infrastructure

* Updated for GHC 6.8 compatibility

* Removed Setup.hs hooks in favor of Cabal configurations

* Thanks to patches from Shae Erisson and Spencer Janssen contributing
  to the GHC 6.8 compatibility.

Homepage: http://software.complete.org/hslogger

LDAP 0.6.3
--
Haskell bindings for LDAP

* GHC 6.8.x compatibility fixes

* Updated homepage in .cabal file

Homepage: http://software.complete.org/ldap-haskell

ListLike 1.0.1
--
Library providing generic operations over list-like types

* GHC 6.8.x compatibility fixes to .cabal file

Homepage: http://software.complete.org/listlike

magic 1.0.7
---
Binding for Magic, the file type identification library

* Updates for GHC 6.8 compatibility

Homepages:
  Hackage: 
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/magic-1.0.7
  Darcs tree: http://darcs.complete.org/magic-haskell

MissingH 1.0.0
--
Library of many useful functions

* Poof, this is 1.0!

* GHC 6.8 compatibility fixes:
  + GHC 6.8 introduced Data.String.  Renamed the MissingH Data.String
to Data.String.Utils
  + doc string tweaks for new haddock
  + cabal and build system updates thanks to Duncan Coutts

* -Wall compatibility improvements throughout.  Patches from gwern0

* Data.List.Utils.uniq efficiency improved thanks to Martin Huschenbett

* New Data.Tuple.Utils from Neil Mitchell

Homepage: http://software.complete.org/missingh

srcinst 0.8.10
--
Tool to install Debian packages using only source, Gentoo-style

* 

Re: [Haskell] Should the following program be accepted by ghc?

2008-01-16 Thread Bruno Oliveira

Hello,

Maybe a slightly more honest type for decomp would be :) :

decomp :: Injective f => f a :=: f b -> a :=: b

Cheers,

Bruno

On Wed, 16 Jan 2008, Derek Elkins wrote:


On Wed, 2008-01-16 at 09:18 +, J.N. Oliveira wrote:

On Jan 16, 2008, at 2:08 AM, Bruno Oliveira wrote:


Hello,

I have been playing with ghc6.8.1 and type families and the
following program is accepted without any type-checking error:


data a :=: b where
   Eq :: a :=: a



decomp :: f a :=: f b -> a :=: b
decomp Eq = Eq


However, I find this odd because if you interpret f as a function
and :=: as equality, then this is saying that

if f a = f b then a = b


This is saying that  f  is injective. So perhaps the standard
interpretation leads implicitly to this class of functions.


Just like data constructors, type constructors are injective. f a
doesn't simplify and so essentially you have structural equality of the
type terms thus f a = f b is -equivalent- to a = b.  Obviously type
functions change this, just like normal value functions do at the value
level.



___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] Should the following program be accepted by ghc?

2008-01-16 Thread Jonathan Cast

On 16 Jan 2008, at 5:59 PM, Bruno Oliveira wrote:


Hello,

Maybe a slightly more honest type for decomp would be :) :

decomp :: Injective f => f a :=: f b -> a :=: b


Perhaps.  Although you *have* to have an implicit Injective
constraint on all type constructor variables to pull off Haskell's
first-order unification trick, so it's not a constraint that will be
relaxed soon (or, most likely, ever).


jcc

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell-cafe] Re: Simulating client server communication with recursive monads

2008-01-16 Thread apfelmus

[redirected to haskell-cafe]

Jan Stranik wrote:

Do you know what is the theoretical foundation for having mfix process
side-effects in the lexical order as opposed to execution order?
Could you point me to some papers, if you know of any off top your head? 


  http://www.cse.ogi.edu/pacsoft/projects/rmb/


Regards,
apfelmus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Properties of optimizer rule application?

2008-01-16 Thread Henning Thielemann

Reading various papers and the Wiki about GHC optimizer rules I got the
impression that there are not much properties I can rely on and I wonder
how I can write a reliable fusion framework with this constraint.
 I read about the strategy to replace functions early by fusable
implementations and replace them back to fast low-level implementation if
fusion was not possible. However, can I rely on the back-translation if I
have no warranty that the corresponding rule is applied? Is there some
warranty that rules are applied as long as applicable rules are available
or is the optimizer free to decide that it worked enough for today?
 I see several phases with a fixed number of iterations in the output of
-ddump-simpl-iterations. Is there some idea behind these phases or is the
structure and number rather arbitrary? If there is only a fixed number of
simplifier runs, how can I rely on complete fusion of arbitrary large
expressions?
 At some place I read that the order of application of rules is arbitrary.
I like to have some warranty that more special rules are applied before
more general rules. That is, if rule X is applicable whereever Y is
applicable then Y shall be tried before X. This is currently not assured,
right?
 Another text passage tells that the simplification is inside-out
expressions. Such a direction would make the design of rules definitely
easier. Having both directions, maybe alternating in the runs of the
simplifier, would be also nice. I could then design transforms of the
kind:
   toFastStructure . slowA . slowB . slowC . slowWithNoFastCounterpart
   fastA . toFastStructure . slowB . slowC . slowWithNoFastCounterpart
   fastA . fastB . toFastStructure . slowC . slowWithNoFastCounterpart
   fastA . fastB . fastC . toFastStructure . slowWithNoFastCounterpart
   fastA . fastBC . toFastStructure . slowWithNoFastCounterpart
   fastABC . toFastStructure . slowWithNoFastCounterpart

 On the one hand the inner of functions may not be available to fusion, if
the INLINE pragma is omitted. As far as I know inlining may take place
also without the INLINE pragma, but I have no warranty. Can I rely on
functions being inlined with INLINE pragma? Somewhere I read that
functions are not inlined if there is still an applicable rule that uses
the function on the left-hand side. Altogether I'm uncertain how inlining
is interleaved with rule application. It was said, that rules are just
alternative function definitions. In this sense a function definition with
INLINE is a more aggressively used simplifier rule, right?
 On the other hand if I set the INLINE pragma then the inner of the
function is not fused. If this would be the case, I could guide the
optimizer to fuse several sub-expressions before others. Say,
  doubleMap f g = map f . map g
 could be fused to
  doubleMap f g = map (f . g)
 and then this fused version can be fused further in the context of the
caller. The current situation seems to be that {-# INLINE doubleMap #-}
switches off local fusion and allows global fusion, whereas omitting the
INLINE pragma switches on local fusion and disallows global fusion. How
can I have both of them?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: [Haskell] ANN: GLFW-0.3 released

2008-01-16 Thread Duncan Coutts

On Tue, 2008-01-15 at 23:32 -0500, Paul L wrote:
 GLFW is a Haskell module for GLFW OpenGL framework. It provides an
 alternative to GLUT for OpenGL based Haskell programs.
 
 The current 0.3 version is for download from hackageDB at:
 http://hackage.haskell.org/cgi-bin/hackage-scripts/package/GLFW-0.3

Well done on getting another release out. It's available in Gentoo's
Haskell overlay already :-)

 Same as the previous 0.2 version it requires Cabal 1.2 or later for
 installation (which comes as default in GHC 6.8 or later). The
 installation is now conforming to the standard Cabal steps.
 
 New addition is the Haddock documentation for all interface functions.
 There is also a sample program to demonstrate its usage on the Haskell
 wiki site for GLFW:
 http://haskell.org/haskellwiki/GLFW
 
 Any feedbacks is welcome! I've only tested it on a limited number of
 platforms + GHC combinations, so if you have installation issue,
 please let me know. Thank you!

You bundle all the GLFW source code. Some systems have the C library
already available as a dynamic library. For Gentoo, for example, we would
prefer to use the existing glfw package rather than statically linking in
another copy. This would be a good application of configurations,
something like:

flag system-glfw
  description: Use the system GLFW C library.
   Otherwise use the bundled copy.
  default: False

Library
 ...

 if flag(system-glfw)
   extra-libraries: glfw
 else
   ...
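
(With such a flag in place, a user wanting the system library could then
configure with something like "runhaskell Setup.hs configure
--flags=system-glfw"; the exact invocation here is just illustrative.)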

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] libmpd-haskell RFC

2008-01-16 Thread Ben Sinclair
Hello all,
  If anybody has already used libmpd-haskell (the darcs repo version)
or would like to look over it I would appreciate their comments.

Thanks,
Ben

http://turing.une.edu.au/~bsinclai/code/libmpd-haskell/


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: [Haskell-cafe] Properties of optimizer rule application?

2008-01-16 Thread Simon Peyton-Jones
GHC has one main mechanism for controlling the application of rules, namely
simplifier phases.  You can say "apply this rule only after phase N" or
"apply this rule only before phase N".  Similarly for INLINE pragmas.  The
manual describes this in detail.
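
For example, a pair of rules using that phase-control syntax might look like
this (an illustrative sketch, not code from any library; the functions and
rule names are made up):

  slow, fusable, fast :: [Int] -> [Int]
  slow    = map (+1)
  fusable = foldr (\x r -> (x + 1) : r) []
  fast    = map (+1)
  {-# NOINLINE slow    #-}
  {-# NOINLINE fusable #-}
  {-# NOINLINE fast    #-}

  -- Active only before simplifier phase 1: rewrite to the fusable form early.
  {-# RULES "slow/fusable" [~1] forall xs. slow xs = fusable xs #-}

  -- Active from phase 1 onwards: anything still unfused is rewritten back
  -- to the fast direct implementation.
  {-# RULES "fusable/fast" [1] forall xs. fusable xs = fast xs #-}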

I urge against relying on top-down or bottom-up guarantees, because they 
are fragile: if you miss a single opportunity to apply rule A, then rule B may 
kick in; but a later inlining or other simplification might make rule A 
applicable.  Phases are the way to go.

That said, GHC has much too rigid a notion of phases at the moment. There are 
precisely 3, namely 2 then 1 then 0, and that does not give enough control.  
Really we should let you give arbitrary names to phases, express constraints (A 
must be before B), and run a constraint solver to map phase names to a linear 
ordering.  The current system is horribly non-modular.

There's scope for an intern project here.

Simon

| -Original Message-
| From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Henning 
Thielemann
| Sent: 16 January 2008 09:57
| To: Haskell Cafe
| Subject: [Haskell-cafe] Properties of optimizer rule application?
|
|
| Reading various papers and the Wiki about GHC optimizer rules I got the
| impression that there are not much properties I can rely on and I wonder
| how I can write a reliable fusion framework with this constraint.
|  I read about the strategy to replace functions early by fusable
| implementations and replace them back to fast low-level implementation if
| fusion was not possible. However, can I rely on the back-translation if I
| have no warranty that the corresponding rule is applied? Is there some
| warranty that rules are applied as long as applicable rules are available
| or is the optimizer free to decide that it worked enough for today?
|  I see several phases with a fixed number of iterations in the output of
| -ddump-simpl-iterations. Is there some idea behind these phases or is the
| structure and number rather arbitrary? If there is only a fixed number of
| simplifier runs, how can I rely on complete fusion of arbitrary large
| expressions?
|  At some place I read that the order of application of rules is arbitrary.
| I like to have some warranty that more special rules are applied before
| more general rules. That is, if rule X is applicable whereever Y is
| applicable then Y shall be tried before X. This is currently not assured,
| right?
|  Another text passage tells that the simplification is inside-out
| expressions. Such a direction would make the design of rules definitely
| easier. Having both directions, maybe alternating in the runs of the
| simplifier, would be also nice. I could then design transforms of the
| kind:
|toFastStructure . slowA . slowB . slowC . slowWithNoFastCounterpart
|fastA . toFastStructure . slowB . slowC . slowWithNoFastCounterpart
|fastA . fastB . toFastStructure . slowC . slowWithNoFastCounterpart
|fastA . fastB . fastC . toFastStructure . slowWithNoFastCounterpart
|fastA . fastBC . toFastStructure . slowWithNoFastCounterpart
|fastABC . toFastStructure . slowWithNoFastCounterpart
|
|  On the one hand the inner of functions may not be available to fusion, if
| the INLINE pragma is omitted. As far as I know inlining may take place
| also without the INLINE pragma, but I have no warranty. Can I rely on
| functions being inlined with INLINE pragma? Somewhere I read that
| functions are not inlined if there is still an applicable rule that uses
| the function on the left-hand side. Altogether I'm uncertain how inlining
| is interleaved with rule application. It was said, that rules are just
| alternative function definitions. In this sense a function definition with
| INLINE is a more aggressively used simplifier rule, right?
|  On the other hand if I set the INLINE pragma then the inner of the
| function is not fused. If this would be the case, I could guide the
| optimizer to fuse several sub-expressions before others. Say,
|   doubleMap f g = map f . map g
|  could be fused to
|   doubleMap f g = map (f . g)
|  and then this fused version can be fused further in the context of the
| caller. The current situation seems to be that {-# INLINE doubleMap #-}
| switches off local fusion and allows global fusion, whereas omitting the
| INLINE pragma switches on local fusion and disallows global fusion. How
| can I have both of them?
| ___
| Haskell-Cafe mailing list
| Haskell-Cafe@haskell.org
| http://www.haskell.org/mailman/listinfo/haskell-cafe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Properties of optimizer rule application?

2008-01-16 Thread Roman Leshchinskiy

Henning Thielemann wrote:

Reading various papers and the Wiki about GHC optimizer rules I got the
impression that there are not much properties I can rely on and I wonder
how I can write a reliable fusion framework with this constraint.


That depends on your definition of reliable. You can't have a framework 
which fuses everything that can be fused but then, I don't think that's 
even theoretically possible. You can, however, have a framework which 
does a pretty good job.



 I read about the strategy to replace functions early by fusable
implementations and replace them back to fast low-level implementation if
fusion was not possible. However, can I rely on the back-translation if I
have no warranty that the corresponding rule is applied? Is there some
warranty that rules are applied as long as applicable rules are available
or is the optimizer free to decide that it worked enough for today?
 I see several phases with a fixed number of iterations in the output of
-ddump-simpl-iterations. Is there some idea behind these phases or is the
structure and number rather arbitrary? If there is only a fixed number of
simplifier runs, how can I rely on complete fusion of arbitrary large
expressions?


In general, you can't. You can control the number of simplifier phases 
with -fsimplifier-phases (in the HEAD only) and the number of iterations 
in each phase with -fmax-simplifier-iterations.


That said, there are other things that break fusion (such as code 
getting between two functions you want to fuse). Again, you can only try 
to make your framework good enough; it'll never be perfect.



 At some place I read that the order of application of rules is arbitrary.
I like to have some warranty that more special rules are applied before
more general rules. That is, if rule X is applicable whereever Y is
applicable then Y shall be tried before X. This is currently not assured,
right?


IIRC, ghc tries more specific rules first but that's somewhat 
unreliable. You can make rule X inactive in simplifier phase 2, however. 
Then, only rule Y will be tried in phase 2; both rules will be tried in 
subsequent phases.


I suspect, though, that ordering requirements on rules might indicate a 
problem in the design of the fusion framework. I think they are best 
avoided.



 Another text passage tells that the simplification is inside-out
expressions. Such a direction would make the design of rules definitely
easier. Having both directions, maybe alternating in the runs of the
simplifier, would be also nice. I could then design transforms of the
kind:

   toFastStructure . slowA . slowB . slowC . slowWithNoFastCounterpart
   fastA . toFastStructure . slowB . slowC . slowWithNoFastCounterpart
   fastA . fastB . toFastStructure . slowC . slowWithNoFastCounterpart
   fastA . fastB . fastC . toFastStructure . slowWithNoFastCounterpart
   fastA . fastBC . toFastStructure . slowWithNoFastCounterpart
   fastABC . toFastStructure . slowWithNoFastCounterpart

Again, I don't think you really want to rely on the order of 
simplification. For your example, you only need the following rules:


toFastStructure (slow{A|B|C} x) = fast{A|B|C} (toFastStructure x)
fastB (fastC x) = fastBC x
fastA (fastBC x) = fastABC x

They do not require any specific traversal order.


 On the one hand the inner of functions may not be available to fusion, if
the INLINE pragma is omitted. As far as I know inlining may take place
also without the INLINE pragma, but I have no warranty. Can I rely on
functions being inlined with INLINE pragma?


No. The inliner still uses heuristics to determine if inlining really is
beneficial. If you want to be sure, use rewrite rules.


 Somewhere I read that

functions are not inlined if there is still an applicable rule that uses
the function on the left-hand side. Altogether I'm uncertain how inlining
is interleaved with rule application. It was said, that rules are just
alternative function definitions. In this sense a function definition with
INLINE is a more aggressively used simplifier rule, right?


No, rules are more aggressive since they are applied unconditionally.


 On the other hand if I set the INLINE pragma then the inner of the
function is not fused. If this would be the case, I could guide the
optimizer to fuse several sub-expressions before others. Say,
  doubleMap f g = map f . map g
 could be fused to
  doubleMap f g = map (f . g)
 and then this fused version can be fused further in the context of the
caller. The current situation seems to be that {-# INLINE doubleMap #-}
switches off local fusion and allows global fusion, whereas omitting the
INLINE pragma switches on local fusion and disallows global fusion. How
can I have both of them?


If you say {-# INLINE doubleMap #-}, you really expect doubleMap to be 
inlined and never to be called explicitly; therefore, you don't really 
care too much what actually happens to it. You can, however, do 
something like:


{-# NOINLINE doubleMap #-}

RE: [Haskell-cafe] Properties of optimizer rule application?

2008-01-16 Thread Henning Thielemann

On Wed, 16 Jan 2008, Simon Peyton-Jones wrote:

 GHC has one main mechanism for controlling the application of rules,
 namely simplifier phases.  You can say apply this rule only after
 phase N or apply this rule only before phase N.  Similarly for INLINE
 pragmas.  The manual describes this in detail.

Indeed. But since it does not mention the number of phases, nor the number
of iterations per phase, nor what actually is performed per iteration,
this appeared to me to be an internal issue of GHC which should not be
relied on.

 I urge against relying on top-down or bottom-up guarantees, because
 they are fragile: if you miss a single opportunity to apply rule A, then
 rule B may kick in; but a later inlining or other simplification might
 make rule A applicable.  Phases are the way to go.

I see.

 That said, GHC has much too rigid a notion of phases at the moment.
 There are precisely 3, namely 2 then 1 then 0, and that does not give enough 
 control.

What about the 'gentle' phase in the dump?

 Really we should let you give arbitrary names to phases, express
 constraints (A must be before B), and run a constraint solver to map
 phase names to a linear ordering.

Sounds like a topological sort. Reminds me of precedence control of infix
operators.

It seems to me that you have something more sophisticated already in mind.
What you sketch would allow application specific code to defer
optimization rules from the standard libraries. E.g. I could write rules
for lists that are designed for my application and that can be applied
without interference from Data.List. When no more of my rules can be
applied, then Data.List rules can fuse the rest.


It's interesting to think about how to integrate this into the Haskell
language. When you want to state "phase A before phase B" you may have to
refer to phases defined in other modules. You have to be able to import them
from other modules, and you cannot use the regular 'import' syntax, since
phase identifiers are not part of the Haskell language. Maybe you must
enclose those imports in pragmas, too. You need new module dependency
checking, since more dependencies can be introduced when optimization is
switched on, or you have to restrict phase imports to modules that are
imported anyway.

{-# RULES
  import qualified Data.List as List
  #-}

 There's scope for an intern project here.

I could take the opportunity.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Yi editor tutorial

2008-01-16 Thread gwern0
On 2008.01.15 22:54:08 -0800, Benjamin L. Russell [EMAIL PROTECTED] 
scribbled 1.8K characters:
 Your Yi editor tutorial looks like a fascinating idea,
 but I use Mac OS X (10.2.8 Jaguar, soon to be upgraded
 to 10.5.x Leopard) at home, and Windows XP at work,
 while your tutorial is based on Ubuntu and the bash
 shell.

 A few questions:

 1) Do you have any versions of your Yi tutorial for
 Mac OS X or Windows XP; if not, are there any plans
 for such tutorials in the future?

I suspect you would have a hard time running on Windows XP: the cabal file 
currently declares a dependency on 'unix' because the VTY interface needs it, 
and also because the Dired module needs System.Posix.Users (to look up file 
owners). So at the very least you'd need to edit those out.

 2) On your tutorial top page
 (http://nobugs.org/developer/yi/), you mentioned that
 you had first learned Haskell in 2001 from _The
 Haskell School of Expression_ by Paul Hudak.  I also
 tried studying that book, and found it very
 interesting (especially with its focus on multimedia
 examples), but unfortunately got stuck on an exercise
 in Chapter 2 that required trigonometry, which I had
 forgotten from lack of use and didn't have time to
 review.  Also, I wanted to study it online, and had
 purchased the book (and thus paid the licensing fee),
 but was unable to find an online version.  Do you have
 any suggestions for online books with the same flavor
 that require less domain-specific knowledge;
 alternatively, do you have any suggestions for online
 material that precisely covers the domain-specific
 knowledge assumed by that book?

 Benjamin L. Russell

--
gwern
NAVCM Area51 M.P.R.I. Misawa Manfurov CACI Internet rapnel W3 HF


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: HStringTemplate -- An Elegant, Functional, Nifty Templating Engine for Haskell

2008-01-16 Thread Bit Connor
On Jan 14, 2008 9:47 AM, Sterling Clover [EMAIL PROTECTED] wrote:
 HStringTemplate is a port of Terrence Parr's lovely StringTemplate
 (http://www.stringtemplate.org) engine to Haskell.

 It is available, cabalized, at:
 darcs get http://code.haskell.org/HStringTemplate/

Template systems have been a crucial missing part of Haskell web
development. I am very happy to hear about this project, and will
definitely be looking at this in the near future!

Thanks,
Bit
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: MonadPrompt + Gtk2Hs = ?

2008-01-16 Thread apfelmus

Felipe Lessa wrote:

apfelmus wrote:

The type of  contPromptM  is even more general than that:

   casePromptOf' :: (r -> f b)
 -> (forall a,b. p a -> (a -> f b) -> f b)
 -> Prompt p r -> f b
   casePromptOf' done cont (PromptDone r) = done r
   casePromptOf' done cont (Prompt p c  ) = cont p (casePromptOf' done cont . c)
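
(For reference, a minimal Prompt type consistent with how these functions
pattern-match; this is a sketch, not quoted from the thread:)

   {-# LANGUAGE ExistentialQuantification #-}

   data Prompt p r
       = PromptDone r
       | forall a. Prompt (p a) (a -> Prompt p r)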


(I guess the forall b inside 'cont' is a typo?)


No, it's intentional and not less general than


casePromptOf :: (r -> b)
 -> (forall a. p a -> (a -> b) -> b)
 -> Prompt p r -> b
casePromptOf done cont (PromptDone r) = done r
casePromptOf done cont (Prompt p c  ) = cont p (casePromptOf done cont . c)


since we can use

   data Const c b = Const { unConst :: c }

and set  f = (Const b)  yielding

   casePromptOf :: forall p,c. (r -> c)
                -> (forall a. p a -> (a -> c) -> c)
                -> Prompt p r -> c
   casePromptOf return bind =
  unConst . casePromptOf' (Const . return) bind'
  where
   bind' :: forall a,b. p a -> (a -> Const c b) -> Const c b
  bind' p c = Const $ bind p (unConst . c)

In other words,  casePromptOf  can be defined with  casePromptOf'  and a 
clever choice of  f  .
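
For readers without the MonadPrompt code at hand, here is a minimal sketch
(reproduced from memory, so treat it as an assumption rather than a verbatim
copy) of the Prompt type whose constructors PromptDone and Prompt are being
matched on above:

{-# LANGUAGE GADTs #-}

-- A request/answer tree: either we are done with an r, or we issue a
-- request  p a  and continue with the answer of type  a.
data Prompt p r where
  PromptDone :: r -> Prompt p r
  Prompt     :: p a -> (a -> Prompt p r) -> Prompt p r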



And, just for the record,

runPromptAgain :: Monad m => (forall a. p a -> m a) -> Prompt p r -> m r
runPromptAgain f = casePromptOf return ((>>=) . f)


I thought that  casePromptOf  would not be general enough to write this 
very definition


  runPromptAgain' f = casePromptOf' return ((>>=) . f)

that's why I used a type constructor  f b  instead, with  f = m  the 
monad in mind. The difference is basically that the  (>>=)  in 
runPromptAgain'  is expected to be polymorphic


   (>>=) :: forall b. m a -> (a -> m b) -> m b

whereas the  (>>=)  in  runPromptAgain  is specialized to the final type 
 m r  of  runPromptAgain  , i.e.


   (>>=) :: m a -> (a -> m r) -> m r


Unfortunately, I failed to realize that  casePromptOf  is in turn not 
less general than  casePromptOf'  rendering my approach pretty useless 
:) I mean, if the second argument in


   casePromptOf' :: (r -> f c)
                 -> (forall a,b. p a -> (a -> f b) -> f b)
                 -> Prompt p r -> f c

is polymorphic, we can certainly plug it into

   casePromptOf  :: (r -> f c)
                 -> (forall a. p a -> (a -> f c) -> f c)
                 -> Prompt p r -> f c

and thus define  casePromptOf'  in terms of  casePromptOf :

   casePromptOf' return bind = casePromptOf return bind



The above equivalence of a type constructor  f  and a simple type  c  in 
certain cases applies to the continuation monad, too. I mean that


   ContT r m a   is equivalent to   Cont (m r) a

and even

   ContT' m a    is equivalent to   forall r. Cont (m r) a

for the more type safe version

   data ContT' m a = ContT' (forall r. (a -> m r) -> m r)

So, it's pretty clear that  ContT  isn't really a monad transformer 
since  m  doesn't need to be a monad at all. Put differently, the 
Control.Monad.Cont  module needs some cleanup since type synonyms


   type ContT r m a = Cont (m r) a
   type ContT' m a  = forall r. Cont (m r) a

(or newtypes for type classery) are enough.
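
To make the claimed equivalence concrete, here is a small self-contained
sketch; the Cont and ContT newtypes below are local stand-ins for the ones
in Control.Monad.Cont, included only so the two conversion functions
type-check on their own:

newtype Cont  r   a = Cont  { runCont  :: (a -> r)   -> r }
newtype ContT r m a = ContT { runContT :: (a -> m r) -> m r }

-- ContT r m a and Cont (m r) a are the same thing up to wrapping,
-- and nothing here requires m to be a monad.
toCont   :: ContT r m a -> Cont (m r) a
toCont   (ContT f) = Cont f

fromCont :: Cont (m r) a -> ContT r m a
fromCont (Cont f)  = ContT f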


Regards,
apfelmus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] HPC of several modules?

2008-01-16 Thread Magnus Therning
How do I get reports on coverage of all modules in a program?

The documentation I've found http://blog.unsafeperformio.com/?p=18 and
http://www.haskell.org/ghc/docs/latest/html/users_guide/hpc.html both do
coverage of a single module.  Going the naive route of first making sure
there are no compiled modules in my source tree (i.e. removing all .o and
.hi files) then running 'ghc --make -fhpc MyTool.hs' succeeds in building
the program, and I get a MyTool.tix and a .mix file in .hpc/ for each module
after running it, but how do I get 'hpc' to produce reports containing more
than just Main?

'hpc6 markup MyTool' includes only Main
'hpc6 markup MyTool Main My.Module' includes only Main
'hpc6 markup MyTool My.Module' results in an error:
Writing: hpc_index.html
hpc6: Prelude.foldr1: empty list

None of the arguments shown by 'hpc6 help markup' stands out as a clear
candidate either...

'hpc6 report --per-module MyTool' generates this:

-module Main-
 80% expressions used (386/479)
100% boolean coverage (0/0)
 100% guards (0/0)
 100% 'if' conditions (0/0)
 100% qualifiers (0/0)
100% alternatives used (0/0)
100% local declarations used (0/0)
100% top-level declarations used (17/17)

Where are my other modules???

Any and all help is appreciated.

/M
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] HPC of several modules?

2008-01-16 Thread Neil Mitchell
Hi

 and .hi files) then running 'ghc --make -fhpc MyTool.hs' succeeds in

That's all I do.

 'hpc6 markup MyTool' includes only Main

I do:

hpc markup MyTool.tix

Then it all Just Works (TM). What is hpc6? I am using the version
supplied with GHC 6.8.

Thanks

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Yi editor tutorial

2008-01-16 Thread Jules Bean

First of all, Andrew: Thanks! That was really interesting.


Benjamin L. Russell wrote:

Your Yi editor tutorial looks like a fascinating idea,
but I use Mac OS X (10.2.8 Jaguar, soon to be upgraded
to 10.5.x Leopard) at home, and Windows XP at work,
while your tutorial is based on Ubuntu and the bash
shell.

A few questions:

1) Do you have any versions of your Yi tutorial for
Mac OS X or Windows XP; if not, are there any plans
for such tutorials in the future?



Didn't look to me very ubuntu specific. I bet it would work out very 
similarly on OS X.


WinXP is something else, as gwern observed

J

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Posix select support

2008-01-16 Thread Galchin Vasili
Hello,

   In the ghc libraries directory I can't find the Haskell .hs/.lhs that
implements Posix select. ?? I found Select.c.

Regards, Vasili
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Posix select support

2008-01-16 Thread Don Stewart
vigalchin:
Hello,

   In the ghc libraries directory I can't find the Haskell
.hs/.lhs that implements Posix select. ?? I found Select.c.

In Control.Concurrent

forkIO
threadDelay
threadWaitRead
threadWaitWrite

The thread primitives are implemented in terms of select, and give you a
cleaner interface.

Also, with Control.Concurrent.STM.

atomically
orElse
retry

You can have threads wait on one of a series of alternative events.
Using STM, you'll be able to compose blocks of such code, which you 
can't do with select.
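
A minimal sketch of both points, assuming the stm package; the choice of
stdin and the two TMVar event sources are illustrative placeholders, not
anything from the original question:

import Control.Concurrent (threadWaitRead)
import Control.Concurrent.STM
import System.Posix.Types (Fd(..))

-- Block this (lightweight) thread until fd 0 (stdin) is readable;
-- the runtime does the select/poll-style multiplexing for us.
waitForStdin :: IO ()
waitForStdin = threadWaitRead (Fd 0)

-- Wait for whichever of two events fires first; unlike a raw select
-- loop, this composes with any other STM code.
waitEither :: TMVar a -> TMVar b -> STM (Either a b)
waitEither x y = fmap Left (takeTMVar x) `orElse` fmap Right (takeTMVar y)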

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Posix select support

2008-01-16 Thread Galchin Vasili
Hi Don,

 Sorry, I wasn't clear enough. I am trying to determine from the
Haskell FFI doc what datatype to use in order to model C's void *, e.g.
for mmap http://www.opengroup.org/onlinepubs/95399/functions/mmap.html

Regards, Vasili



On 1/16/08, Don Stewart [EMAIL PROTECTED] wrote:

 vigalchin:
 Hello,
 
In the ghc libraries directory I can't find the Haskell
  .hs/.lhs that implements Posix select. ?? I found Select.c.

 In Control.Concurrent

forkIO
threadDelay
threadWaitRead
threadWaitWrite

 The thread primitives are implemented in terms of select, and give you a
 cleaner interface.

 Also, with Control.Concurrent.STM.

atomically
orElse
retry

 You can have threads wait on one of a series of alternative events.
 Using STM, you'll be able to compose blocks of such code, which you
 can't do with select.

 -- Don

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Posix select support

2008-01-16 Thread Don Stewart
vigalchin:
Hi Don,

 Sorry, I wasn't clear enough. I am trying to determine from the
Haskell FFI doc what datatype to use in order to model C's void *, e.g.
for mmap
[1]http://www.opengroup.org/onlinepubs/95399/functions/mmap.html

Regards, Vasili

In the System.IO.Posix.MMap module, mmap is imported as:

foreign import ccall unsafe "hs_bytestring_mmap.h hs_bytestring_mmap"
    c_mmap   :: CSize -> CInt -> IO (Ptr Word8)

foreign import ccall unsafe "hs_bytestring_mmap.h munmap"
    c_munmap :: Ptr Word8 -> CSize -> IO CInt
 
You can see the full binding to mmap here:

http://hackage.haskell.org/cgi-bin/hackage-scripts/package/bytestring-mmap

Cheers,
  Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: Haddock version 2.0.0.0

2008-01-16 Thread Henning Thielemann

On Tue, 8 Jan 2008, David Waern wrote:

 Changes in version 2.0.0.0:

   * The GHC API is used as the front-end

It's great to see this progress in Haddock. However, is Haddock now more
difficult to port than before? Is there some bug- and feature request
tracker for Haddock? I only know of
  http://www.haskell.org/haskellwiki/Haddock/Development_ideas
 and the first big point seems to be finished now.

 I like to have the following enhancements:

 * Optionally show qualifications of identifiers, that is print
Sequence.map rather than map, Music.T rather than just T. The
option for haddock could be
 --qualification QUAL
   QUAL=none   (default) strip off qualification (just map)
   QUAL=orig   show the identifiers as they are written in the module 
(e.g. map or List.map)
   QUAL=full   show all identifiers with full qualification 
(Data.List.map)
   Actually I tried to implement it by myself in the old Haddock, but I
could not precisely identify the place, where the qualification is
removed.

 * Documentation of arguments of type constructors other than 'top level' 
arrows. E.g.
T (a {- ^ arg -}  ->  b {- ^ result -} )
(a {- ^ arg -}  ->  b {- ^ result -} ) -> c
(a {- ^ x coord -}, b {- ^ y coord -}) -> c
   It's probably difficult to format properly in HTML.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Posix select support

2008-01-16 Thread Galchin Vasili
Hi Don,

 I am looking at the code for ghc-6.8.2 but don't see the mmap support.
Is this newly written by you? I would also like to help round out the Posix
functionality in Haskell. Is there an accurate list of what needs to be done
given the fact that maybe some work is in progress but not checked in?

Thank you, Vasili


On 1/16/08, Don Stewart [EMAIL PROTECTED] wrote:

 vigalchin:
 Hi Don,
 
  Sorry, I wasn't clear enough. I am trying to determine from the
 Haskell FFI doc what datatype to use in order to model C's void *,
 e.g.
 for mmap
 [1]http://www.opengroup.org/onlinepubs/95399/functions/mmap.html
 
 Regards, Vasili

 In the System.IO.Posix.MMap module, mmap is imported as:

foreign import ccall unsafe "hs_bytestring_mmap.h hs_bytestring_mmap"
    c_mmap   :: CSize -> CInt -> IO (Ptr Word8)

foreign import ccall unsafe "hs_bytestring_mmap.h munmap"
    c_munmap :: Ptr Word8 -> CSize -> IO CInt

 You can see the full binding to mmap here:


 http://hackage.haskell.org/cgi-bin/hackage-scripts/package/bytestring-mmap

 Cheers,
 Don

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Posix select support

2008-01-16 Thread Don Stewart
vigalchin:
Hi Don,

 I am looking at the code for ghc-6.8.2 but don't see the mmap
support. Is this newly written by you? I would also like to help round out
the Posix functionality in Haskell. Is there an accurate list of what
needs to be done given the fact that maybe some work is in progress but
not checked in?

Thank you, Vasili

Code isn't generally checked into ghc 6.8.2, or the base libraries.
Instead, new projects are distributed via hackage.haskell.org.  It is
like CPAN for Haskell, if you're familiar with CPAN.

The mmap bytestring package is available there, for example. 

For improving POSIX support in general, careful patches to the 'unix'
library would be the best way:

http://hackage.haskell.org/cgi-bin/hackage-scripts/package/unix

If something you need is missing from there, write it as a patch against 
the darcs repository for `unix',
http://darcs.haskell.org/packages/unix/, and submit it to
[EMAIL PROTECTED] for inclusion in the next release of that library.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Data constructors versus types

2008-01-16 Thread Peter Verswyvelen
I know nothing about theoretical computer science, but I was wondering
if it is possible to forget about types, and just keep the concept of data
constructors, and have an analyzer determine correctness of the code and
staticness of the data?

Basically this is what SCHEME does no? Doesn't SCHEME have static whole
program analyzers to remove the overhead of the symbol tags and check
correctness of a program (Stalin, Petit-Scheme, ...)?

What are the pros/contras?

Thank you,
Peter









___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Data constructors versus types

2008-01-16 Thread Don Stewart
bf3:
 I know nothing about theoretical computer science, but I was wondering
 if it is possible to forget about types, and just keep the concept of data
 constructors, and have an analyzer determine correctness of the code and
 staticness of the data?

The analysis would be type inference and checking.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Properties of optimizer rule application?

2008-01-16 Thread Henning Thielemann

On Wed, 16 Jan 2008, Roman Leshchinskiy wrote:

 Henning Thielemann wrote:
  Reading various papers and the Wiki about GHC optimizer rules I got the
  impression that there are not much properties I can rely on and I wonder
  how I can write a reliable fusion framework with this constraint.

 That depends on your definition of reliable. You can't have a framework
 which fuses everything that can be fused but then, I don't think that's
 even theoretically possible.

At least I expect that it fuses greedily and does not stop as long as
rules are applicable. Thinking of intermediate fusable function
replacements, I can be sure that rules are invoked that prevent me from
making things worse by optimization attempts.

   I read about the strategy to replace functions early by fusable
  implementations and replace them back to fast low-level implementation if
  fusion was not possible. However, can I rely on the back-translation if I
  have no warranty that the corresponding rule is applied? Is there some
  warranty that rules are applied as long as applicable rules are available
  or is the optimizer free to decide that it worked enough for today?
   I see several phases with a fixed number of iterations in the output of
  -ddump-simpl-iterations. Is there some idea behind these phases or is the
  structure and number rather arbitrary? If there is only a fixed number of
  simplifier runs, how can I rely on complete fusion of arbitrary large
  expressions?

 In general, you can't.

To give a precise example: If I have a sequence of 'map's
  map f0 . map f1 . ... . map fn
 then there is some length where this is no longer collapsed to a single
'map'? However then I wonder, how it is possible to make the compiler to
go into an infinite loop by the rule

   "loop"   forall x,y.  f x y = f y x

 as stated in the GHC manual:
   http://haskell.org/ghc/docs/latest/html/users_guide/rewrite-rules.html

 I'm still uncertain how much is done in one iteration in one phase, since
there seems to be several rules that can fire in one iteration.


 You can control the number of simplifier phases with -fsimplifier-phases
 (in the HEAD only) and the number of iterations in each phase with
 -fmax-simplifier-iterations.

Good to know.

 That said, there are other things that break fusion (such as code
 getting between two functions you want to fuse). Again, you can only try
 to make your framework good enough; it'll never be perfect.

It would be nice to have a flag which alters the rule application order of
the compiler randomly in order to see whether the fusion framework
implicitly relies on a particular behaviour of the current compiler
version.

   Another text passage tells that the simplification is inside-out
  expressions. Such a direction would make the design of rules definitely
  easier. Having both directions, maybe alternating in the runs of the
  simplifier, would be also nice. I could then design transforms of the
  kind:
 toFastStructure . slowA . slowB . slowC . slowWithNoFastCounterpart
 fastA . toFastStructure . slowB . slowC . slowWithNoFastCounterpart
 fastA . fastB . toFastStructure . slowC . slowWithNoFastCounterpart
 fastA . fastB . fastC . toFastStructure . slowWithNoFastCounterpart
 fastA . fastBC . toFastStructure . slowWithNoFastCounterpart
 fastABC . toFastStructure . slowWithNoFastCounterpart

 Again, I don't think you really want to rely on the order of
 simplification. For your example, you only need the following rules:

 toFastStructure (slow{A|B|C} x) = fast{A|B|C} (toFastStructure x)
 fastB (fastC x) = fastBC x
 fastA (fastBC x) = fastABC x

 They do not require any specific traversal order.

Ok, this was a bad example. Try this one:
   project . project . foo
 with the rules
   project (project x) = project x
   project (foo x) = projectFoo x

Both rules can be applied to the expression, but you get one fusion more,
if you use the first one first. Let me guess, in order to solve that, I
should restrict the first rule to an earlier phase than the second rule.
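
For what it's worth, here is one way that staging could be written down, as
a sketch using GHC's phase annotations and the hypothetical project, foo and
projectFoo names from the example (the stub definitions are only there so
the module compiles); a [1] annotation keeps a rule switched off during the
first numbered simplifier phase (phase 2 by default), so the other rule gets
that phase to itself:

module ProjectRules where

-- Hypothetical stand-ins for the functions in the example; NOINLINE
-- keeps GHC from inlining them before the rules have a chance to fire.
project, foo, projectFoo :: Int -> Int
project    = id
foo        = (+ 1)
projectFoo = (+ 1)
{-# NOINLINE project #-}
{-# NOINLINE foo #-}
{-# NOINLINE projectFoo #-}

{-# RULES
"project/project"     forall x. project (project x) = project x
"project/foo"     [1] forall x. project (foo x)     = projectFoo x
  #-}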




Thanks for the detailed answer and thanks to the busy people who have
created the optimizer and who have written all the papers and Wiki pages
for making use of this feature. I don't know another language where it is
possible to control the optimizer in this way.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Posix select support

2008-01-16 Thread Ian Lynagh
On Wed, Jan 16, 2008 at 12:40:22PM -0800, Donald Bruce Stewart wrote:
 
 If something you need is missing from there, write it as a patch against 
 the darcs repository for `unix',
 http://darcs.haskell.org/packages/unix/, and submit it to
 [EMAIL PROTECTED] for inclusion in the next release of that library.

Please note that patches for the unix library should follow the library
submissions process:

http://www.haskell.org/haskellwiki/Library_submissions


Thanks
Ian

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: [darcs-devel] announcing darcs 2.0.0pre2

2008-01-16 Thread David Roundy
On Thu, Jan 03, 2008 at 11:11:40AM +, Simon Marlow wrote:
  Anyhow, could you retry this test with the above change in methodology,
  and let me know if (a) the pull is still slow the first time and (b) if
  it's much faster the second time (after the reverse unpull/pull)?
 
 I think I've done it in both directions now, and it got faster, but still 
 much slower than darcs1:
 
 $ time darcs2 unpull --from-tag 2007-09-25 -a
 Finished unpulling.
 58.68s real   50.64s user   6.36s system   97% darcs2 unpull --from-tag 
 2007-09-25 -a
 $ time darcs2 pull -a ../ghc-darcs2
 Pulling from ../ghc-darcs2...
 Finished pulling and applying.
 53.28s real   44.62s user   7.10s system   97% darcs2 pull -a ../ghc-darcs2
 
 This is still an order of magnitude slower than darcs1 for the same 
 operation.  (these times are now on the local filesystem, BTW)

I've recently found the problem leading to this slowdown (I believe) and
get about an order-of-magnitude improvement in the speed of a pull of 400
patches in the ghc repository.  It turned out to be an issue that scaled
with the size (width) of the repository, not with the number of patches
(which had been the obvious suspect), which was causing trouble when
applying to the pristine cache.

At this point, darcs-2 outperforms darcs-1 on most tests that I've tried,
so it'd be a good time to find some more performance problems, if you
can... and I don't doubt that there are more out there.
-- 
David Roundy
Department of Physics
Oregon State University
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Data constructors versus types

2008-01-16 Thread Achim Schneider
Peter Verswyvelen [EMAIL PROTECTED] wrote:

 I know nothing about theoretical computer science, but I was wondering
 if it is possible to forget about types, and just keep the concept of
 data constructors, and have an analyzer determine correctness of the
 code and staticness of the data?
 
 Basically this is what SCHEME does no? Doesn't SCHEME have static
 whole program analyzers to remove the overhead of the symbol tags
 and check correctness of a program (Stalin, Petit-Scheme, ...)?
 
 What are the pros/contras?
 
Basically, it's a matter of taste, and how much of the checking can be
done at compile-time... which gets quite involved and O(big), if all
you have is (tagged) lists with type information.

And, yes, Stalin manages to specialize a -> a functions to Int -> Int
to make numerical code as fast or faster than C, but so does GHC.

Plus a GHC build may allow you to get a coffee, Stalin allows you to go
shopping, watch a movie and then go on vacation.

That is because, in general, you can't forget about the type of your
data, you need it in some way or the other to do anything with it.

-- 
(c) this sig last receiving data processing entity. Inspect headers for
past copyright information. All rights reserved. Unauthorised copying,
hiring, renting, public performance and/or broadcasting of this
signature prohibited. 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Data constructors versus types

2008-01-16 Thread Achim Schneider
Achim Schneider [EMAIL PROTECTED] wrote:

 And, yes, Stalin manages to specialize a -> a functions to Int -> Int
 to make numerical code as fast or faster than C, but so does GHC.
 
That is, seen formally, quite fuzzy. I'm going to be beaten for it.

-- 
(c) this sig last receiving data processing entity. Inspect headers for
past copyright information. All rights reserved. Unauthorised copying,
hiring, renting, public performance and/or broadcasting of this
signature prohibited. 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] HPC of several modules?

2008-01-16 Thread Christopher Lane Hinson



What is hpc6? I am using the version supplied with GHC 6.8.


This is just hpc on debian/ubuntu systems, where all the binaries have 
symlinks that append a version number.


ghc6.8 on debian doesn't provide hpc without the 6.  I just reportbugged 
this.


--Lane
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Data constructors versus types

2008-01-16 Thread Peter Verswyvelen
Thank you for explaining.

I was wondering if the same syntax could be used somehow (not in
Haskell, in some theoretical language), I mean use an annotation to tell
the compiler that a type-tag should be determined at compile time and
not at runtime, otherwise - error

So eg

// Runtime tag, aka data constructor
foo (Int n) = ...

// Compile tag, aka type
foo (Int n) = ...

Might not make any sense...

You're talking about O(big)... But wasn't the C language in some way
successful because on the hardware at that time other much nicer
languages (e.g. LISP) were just way too slow? Or was this just O(n)
times slower? 

IMHO: Shouldn't concepts that are conceptually the same (in this case,
giving meaning/adding constraints to bits of data ) at runtime and
compile time look very similar in the language? Most languages require
completely different syntax and code when you want something to be lazy
versus strict. Haskell doesn't, you can just add an annotation if you
want it to be strict, no much rewriting is required. However, if I want
to change a runtime data constructor definition (and code) into a
compiletime type, then I can rewrite all of my code basically. That is
not the case in SCHEME as far as I understand it.



On Wed, 2008-01-16 at 22:20 +0100, Achim Schneider wrote:
 Peter Verswyvelen [EMAIL PROTECTED] wrote:
 
  I know nothing about theoretical computer science, but I was wondering
  if it is possible to forget about types, and just keep the concept of
  data constructors, and have an analyzer determine correctness of the
  code and staticness of the data?
  
  Basically this is what SCHEME does no? Doesn't SCHEME have static
  whole program analyzers to remove the overhead of the symbol tags
  and check correctness of a program (Stalin, Petit-Scheme, ...)?
  
  What are the pros/contras?
  
 Basically, it's a matter of taste, and how much of the checking can be
 done at compile-time... which gets quite involved and O(big), if all
 you have is (tagged) lists with type information.
 
  And, yes, Stalin manages to specialize a -> a functions to Int -> Int
 to make numerical code as fast or faster than C, but so does GHC.
 
 Plus a GHC build may allow you to get a coffee, Stalin allows you to go
 shopping, watch a movie and then go on vacation.
 
 That is because, in general, you can't forget about the type of your
 data, you need it in some way or the other to do anything with it.
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Compiling Blobs

2008-01-16 Thread Peter Verswyvelen
I'm trying to build http://www.cs.york.ac.uk/fp/darcs/Blobs using GHC
6.8.2. It looks like a good Haskell program to learn from.

So far I managed to modify the source code so it makes use of the new
HaXML libraries, and after a lot of hacking I could build and link to
wxHaskell, but my app crashes (I do get a window however, woohoo)

Maybe someone else managed already?

Thanks,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: ANN: A triple of new packages for talking to the outside world

2008-01-16 Thread Dominic Steinitz
Adam Langley agl at imperialviolet.org writes:

 
 On Jan 10, 2008 10:45 AM, Don Stewart dons at galois.com wrote:
  That's pretty much what we envisaged as the approach to take.
  Monad transformers adding some bit-buffer state over Get/Put.
 
 For anyone who's still reading this thread...
 
 I've just uploaded[1] binary-strict 0.2.1 which includes
 Data.Binary.Strict.BitGet - a Get like monad which works by the bit.
 I'm afraid that Haddock 2 is choaking on {-# UNPACK #-}, so I don't
 have the HTML documentation to point to. (And I thought that Haddock 2
 was going to fix all those parsing issues :( - hopefully I'm just
 doing something stupid).
 
 [1] 
 http://hackage.haskell.org/cgi-bin/hackage-scripts/package/binary-strict-0.2.1
 
 AGL
 

Thanks for taking the time on this.

The old NewBinary had

NewBinary.Binary.getBits ::
  NewBinary.Binary.BinHandle -> Int -> IO GHC.Word.Word8

which allowed you to do things like

tlv_ bin =
   do tagValueVal <- getBits bin 5
      tagConstructionVal <- getBits bin 1
      tagTypeVal <- getBits bin 2

I'm sure I'm wrong but putting bits into [Bool] doesn't look very efficient. Of
course, NewBinary didn't address what happened for n = 8. Some possibilities
are a) not allowing more than 8 b) returning [Word8] or c) (which I thought was
where we'd go) a ByteString with some sort of padding.

Dominic.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: Haddock version 2.0.0.0

2008-01-16 Thread David Waern
2008/1/16, Henning Thielemann [EMAIL PROTECTED]:

 On Tue, 8 Jan 2008, David Waern wrote:

  Changes in version 2.0.0.0:
 
* The GHC API is used as the front-end

 It's great to see this progress in Haddock. However, is Haddock now more
 difficult to port than before?

Haddock is already ported to the GHC API; the wiki page needs updating.

 Is there some bug- and feature request
 tracker for Haddock? I only know of
   http://www.haskell.org/haskellwiki/Haddock/Development_ideas
  and the first big point seems to be finished now.

There is no bug-tracker yet. When community.haskell.org provides Trac,
we might use that. For now, we're using the TODO file in the darcs
repo (code.haskell.org/haddock).

  I like to have the following enhancements:

  * Optionally show qualifications of identifiers, that is print
 Sequence.map rather than map, Music.T rather than just T. The
 option for haddock could be
  --qualification QUAL
QUAL=none   (default) strip off qualification (just map)
QUAL=orig   show the identifiers as they are written in the module 
 (e.g. map or List.map)
QUAL=full   show all identifiers with full qualification 
 (Data.List.map)
Actually I tried to implement it by myself in the old Haddock, but I
 could not precisely identify the place, where the qualification is
 removed.

  * Documentation of arguments of type constructors other than 'top level' 
 arrows. E.g.
 T (a {- ^ arg -}  ->  b {- ^ result -} )
 (a {- ^ arg -}  ->  b {- ^ result -} ) -> c
 (a {- ^ x coord -}, b {- ^ y coord -}) -> c
It's probably difficult to format properly in HTML.


I've added these to the TODO file.

Thanks,
David
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Data constructors versus types

2008-01-16 Thread Achim Schneider
Peter Verswyvelen [EMAIL PROTECTED] wrote:

 Thank you for explaining.
 
 I was wondering if the same syntax could be used somehow (not in
 Haskell, in some theoretical language), I mean use an annotation to
 tell the compiler that a type-tag should be determined at compile
 time and not at runtime, otherwise - error
 
 So eg
 
 // Runtime tag, aka data constructor
 foo (Int n) = ...
 
 // Compile tag, aka type
 foo (Int n) = ...
 
 Might not make any sense...
 
 ghc -ddump-simpl and assure that your values get unboxed...

 You're talking about O(big)... But wasn't the C language in some way
 succesful because on the hardware at that time other much nicer
 languages (e.g. LISP) were just way too slow? Or was this just O(n)
 times slower? 
 
Compiler technology also wasn't as advanced as now, Stalin can't
compile even small programs under say 5 minutes... compare this to e.g.
TurboPascal, which afair uses three passes: Parsing, Error Reporting,
Code Generation, it was similar with C compilers back then.

Lisp was fast on lisp machines, where it is the same as what C is to
Neumann-Architectures: An assembler.

I'm not at all sure about the specific O's involved, but I guess it's
quite easy to get to NP-complete if you want to do really much without
much information.


 IMHO: Shouldn't concepts that are conceptually the same (in this case,
 giving meaning/adding constraints to bits of data ) at runtime and
 compile time look very similar in the language? Most languages require
 completely different syntax and code when you want something to be
 lazy versus strict. Haskell doesn't, you can just add an annotation
 if you want it to be strict, no much rewriting is required. However,
 if I want to change a runtime data constructor definition (and code)
 into a compiletime type, then I can rewrite all of my code basically.
 That is not the case in SCHEME as far as I understand it.
 
Scheme knows no types but the builtins like INT or PAIR or LIST or
SYMBOL and so on. Even if you distinguish say

('complex 1 2)
from
('vec3 1 2 3)

, the compiler in general won't stop you from passing these things into
the wrong functions. It doesn't even know that a function is passed a
LIST (QUOTEDSYMBOL INT INT) or LIST (QUOTEDSYMBOL INT INT INT), it
just sees a pair, in both cases.

Lisp is actually not really meant to be compiled, but interpreted. The
nice thing is that it doesn't need more than a handful of primitives, a
list parser and heap manager/garbage collector and evaluator, which all
can be implemented in under 1000 lines of C. Things get more involved
with get/cc, but then how many C programmers ever heard of setjmp...

on top of my head, one set of possible primitives is

quote lambda set! succ pred cond.

you can then start by defining define by

(set! 'define (lambda (sym f) (set! sym f)))

There's the wizard book and Graham's Ansi Common Lisp if you're
interested in how cheap lisp actually is.

-- 
(c) this sig last receiving data processing entity. Inspect headers for
past copyright information. All rights reserved. Unauthorised copying,
hiring, renting, public performance and/or broadcasting of this
signature prohibited. 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Properties of optimizer rule application?

2008-01-16 Thread Roman Leshchinskiy

Henning Thielemann wrote:


To give a precise example: If I have a sequence of 'map's
  map f0 . map f1 . ... . map fn
 then there is some length where this is no longer collapsed to a single
'map'? 


No. After applying a rule, the simplifier optimises the result of the 
rewriting. This means that with (map f (map g x) = map (f . g) x),


  map f (map g (map h xs))

is first rewritten to

  map (f . g) (map h xs)

and the immediately to

  map (f . g . h) xs

Rewriting does not shift the focus of the simplifier.


   project . project . foo
 with the rules
   project (project x) = project x
   project (foo x) = projectFoo x

Both rules can be applied to the expression, but you get one fusion more,
if you use the first one first. Let me guess, in order to solve that, I
should restrict the first rule to an earlier phase than the second rule.


That's one possibility. It would be vastly preferable, however, to add 
the rule


  project (projectFoo x) = projectFoo x

In general, you want your rewrite system to be confluent. I suspect that 
 non-confluence always indicates a design problem. This is within one 
set of rules, of course - explicitly staged things like rewriting back 
of stuff which didn't fuse are different.
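
Concretely, and reusing the placeholder definitions from the sketch a few
messages up, the confluent version of that (hypothetical) rule set would
read:

{-# RULES
"project/project"     forall x. project (project x)    = project x
"project/foo"         forall x. project (foo x)        = projectFoo x
"project/projectFoo"  forall x. project (projectFoo x) = projectFoo x
  #-}

Whichever rule fires first on  project (project (foo x)), every rewrite
path now ends at the same  projectFoo x  form.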


Roman

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Data constructors versus types

2008-01-16 Thread jerzy . karczmarczuk
Achim Schneider writes: 


Lisp is actually not really meant to be compiled, but interpreted. The
nice thing is that it doesn't need more than a handful of primitives, a
list parser and heap manager/garbage collector and evaluator, which all
can be implemented in under 1000 lines of C. Things get more involved
with get/cc, but then how many C programmers ever heard of setjmp...


Would you mind stopping to spread dubious truths?
Certainly, Lisp processors started with simple eval/apply interpreters,
since they were easy to construct, but compilers, their name is Legion! 


Look at CMU Common Lisp compiler.
GNU CLISP compiler
Lisp Works compiler
Allegro compiler
... 


There are also Lisp-C translators. The result is of course compiled.
CLiCC, this is a German (Kiel) product. Perhaps not so far from you. 


Where did you read that Lisp is not meant to be compiled, for goodness'
sake!? 



Jerzy Karczmarczuk 



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Haskell-Support in Ohloh

2008-01-16 Thread Joachim Breitner
Hi,

I’m only marginally involved or up to date there, but still some might
have missed this:

Ohloh has begun to release their tools, starting with ohcount, their
tool to measure code and comments lines:

http://labs.ohloh.net/ohcount/

They explicitly write that they want haskell support, and the oldest
open bug report on their page is about this:

http://labs.ohloh.net/ohcount/ticket/205

So if anyone feels like programming some ruby (I guess they want it to
be in that language as well) and wants to give the haskell community a
chance for wider audience, give it a shot.

Greetings,
Joachim

-- 
Joachim nomeata Breitner
  mail: [EMAIL PROTECTED] | ICQ# 74513189 | GPG-Key: 4743206C
  JID: [EMAIL PROTECTED] | http://www.joachim-breitner.de/
  Debian Developer: [EMAIL PROTECTED]


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell-Support in Ohloh

2008-01-16 Thread Don Stewart
mail:
 Hi,
 
 I’m only marginally involved or up to date there, but still some might
 have missed this:
 
 Ohloh has begun to release their tools, starting with ohcount, their
 tool to measure code and comments lines:
 
 http://labs.ohloh.net/ohcount/
 
 They explicitly write that they want haskell support, and the oldest
 open bug report on their page is about this:
 
 http://labs.ohloh.net/ohcount/ticket/205
 
 So if anyone feels like programming some ruby (I guess they want it to
 be in that language as well) and wants to give the haskell community a
 chance for wider audience, give it a shot.
 

Oh, great! I've been waiting for this. It's annoying having xmonad 
classified as a C/C++ project (and sjanssen and I as C/C++ developers!)

http://www.ohloh.net/projects/6869?p=xmonad

Ohloh Summary

* Mostly written in C/C++
* Extremely well-commented source code

So what we need is a) darcs support (so we don't have to convert repos
to git), and b) a haskell lexer. Then all the haskell projects can get 
analysed, linked to , etc, -- and made more visible to the general open
source world.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: HStringTemplate -- An Elegant, Functional, Nifty Templating Engine for Haskell

2008-01-16 Thread Graham Fawcett
On Jan 14, 2008 2:47 AM, Sterling Clover [EMAIL PROTECTED] wrote:
 HStringTemplate is a port of Terrence Parr's lovely StringTemplate
 (http://www.stringtemplate.org) engine to Haskell.

This is very cool.

Your docs describe a function, cacheSTGroup:

cacheSTGroup :: Int -> STGen a -> STGen a
Given an integral amount of seconds and a group, returns a group
cached for that span of time. Does not cache misses.

How does this work without breaking referential transparency?
Shouldn't it be in the IO monad if it is time-dependent?

Graham
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Data constructors versus types

2008-01-16 Thread Brandon S. Allbery KF8NH


On Jan 16, 2008, at 18:58 , [EMAIL PROTECTED] wrote:


Achim Schneider writes:
Lisp is actually not really meant to be compiled, but interpreted. The


Would you mind stopping to spread dubious truths?
Certainly, Lisp processors started with simple eval/apply interpreters,
since they were easy to construct, but compilers, their name is Legion!


He is correct given that he almost certainly means "was not originally
meant to be compiled" --- and please, spare us the obvious pedantry.


Also, you might want to take a close look at your public persona as
exposed on this list.


--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
electrical and computer engineering, carnegie mellon universityKF8NH


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: ANN: A triple of new packages for talking to the outside world

2008-01-16 Thread Adam Langley
On Jan 16, 2008 2:41 PM, Dominic Steinitz
[EMAIL PROTECTED] wrote:
 tlv_ bin =
do tagValueVal <- getBits bin 5
   tagConstructionVal <- getBits bin 1
   tagTypeVal <- getBits bin 2

 I'm sure I'm wrong but putting bits into [Bool] doesn't look very efficient. 
 Of
 course, NewBinary didn't address what happened for n > 8. Some possibilities
 are a) not allowing more than 8 b) returning [Word8] or c) (which I thought 
 was
 where we'd go) a ByteString with some sort of padding.

BitGet is just an API RFC at the moment, so I'm just describing it
here - not trying to justify it.

In BitGet there are getAsWord[8|16|32|64], which take a number of bits ($n$) and
return the next $n$ bits in the bottom of a Word$x$. Thus, getAsWord8 is what
you call getBits and, if you had a 48 bit number, you could use getAsWord64 and
the bottom 48-bits of the resulting Word64 would be what you want.

Equally, asking for more than $x$ bits when calling getAsWord$x$ is a mistake,
however I don't check for it in the interest of speed.

There are also get[Left|Right]ByteString which return the next $n$ bits in a
ByteString of Word8's. The padding is either at the end of the last byte (left
aligned) or at the beginning of the first byte (right aligned).

If you did want a [Bool], you could use:
  bits <- sequence $ take n $ repeat getBit
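
Putting the pieces together, here is a hedged sketch of how the TLV-header
parse from earlier in the thread might look against this API; the module
path and the exact shapes of getAsWord8 and runBitGet are assumptions based
on the description above and the binary-strict docs, not verified here:

import Data.Word (Word8)
import qualified Data.ByteString as B
import Data.Binary.Strict.BitGet (BitGet, getAsWord8, runBitGet)

-- Read the three bit-fields of the tag octet, in the order used by the
-- original tlv_ example: 5 bits, then 1 bit, then 2 bits.
tlvHeader :: BitGet (Word8, Word8, Word8)
tlvHeader = do
  tagValue        <- getAsWord8 5
  tagConstruction <- getAsWord8 1
  tagType         <- getAsWord8 2
  return (tagValue, tagConstruction, tagType)

-- Assumed signature: runBitGet :: B.ByteString -> BitGet a -> Either String a
parseHeader :: B.ByteString -> Either String (Word8, Word8, Word8)
parseHeader bs = runBitGet bs tlvHeader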


AGL

--
Adam Langley  [EMAIL PROTECTED]
http://www.imperialviolet.org   650-283-9641
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Data constructors versus types

2008-01-16 Thread gwern0
On 2008.01.17 00:58:19 +0100, [EMAIL PROTECTED] scribbled 0.9K characters:
 Achim Schneider writes:
 Lisp is actually not really meant to be compiled, but interpreted. The
 nice thing is that it doesn't need more than a handful of primitives, a
 list parser and heap manager/garbage collector and evaluator, which all
 can be implemented in under 1000 lines of C. Things get more involved
 with get/cc, but then how many C programmers ever heard of setjmp...

 Would you mind stopping to spread dubious truths?
 Certainly, Lisp processors started with simple eval/apply interpreters,
 since they were easy to construct, but compilers, their name is Legion!
 Look at CMU Common Lisp compiler.
 GNU CLISP compiler
 Lisp Works compiler
 Allegro compiler
 ...
...
 Jerzy Karczmarczuk

I don't think it's a dubious truth. Apparently a lot of Lisps (like Maclisp or 
Interlisp, I hear) had a situation where the semantics of a program could 
differ depending on whether it was compiled or interpreted, and Scheme and 
Common Lisp made a point of trying to avoid that.

In _Introduction to Common Lisp_, we read:
 Most Lisp implementations are internally inconsistent in that by default the 
interpreter and compiler may assign different semantics to correct programs. 
This semantic difference stems primarily from the fact that the interpreter 
assumes all variables to be dynamically scoped, whereas the compiler assumes 
all variables to be local unless explicitly directed otherwise. This difference 
has been the usual practice in Lisp for the sake of convenience and efficiency 
but can lead to very subtle bugs. The definition of Common Lisp avoids such 
anomalies by explicitly requiring the interpreter and compiler to impose 
identical semantics on correct programs so far as possible. 
http://www.cs.cmu.edu/Groups/AI/html/cltl/clm/node6.html#SECTION0051

Given that it was designed as interpreted, compilation was motivated by 
efficiency concerns, and interpreted techniques differed from compiled 
techniques (and in a way that would allow you to redefine and change more stuff 
on the fly), I think it's a reasonable case to say that many Lisps - like all 
the ones before Scheme and CL - were meant to be interpreted and not so much 
compiled.

--
gwern
NATIA DIA Burns espionage 97 utopia orthodox Meade cond SOCIMI


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] 0/0 > 1 == False

2008-01-16 Thread Mitar
Hi!

On Jan 11, 2008 7:30 AM, Cristian Baboi [EMAIL PROTECTED] wrote:
 NaN is not 'undefined'

Why not? What is the semantic difference? I believe Haskell should use
undefined instead of NaN for all operations which are mathematically
undefined (like 0/0). NaN should be used in languages which do not
support such nice Haskell features. If Haskell used undefined, such an
error would propagate itself to higher levels of the computation; with
NaN it does not.

if bigComputation > 1
  then ...
  else ...

Would evaluating the else branch be semantically correct if bigComputation
returns NaN? No, it is not. With undefined this is correctly
(not) evaluated.


Mitar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: [Haskell] Simulating client server communication with recursive monads

2008-01-16 Thread Ryan Ingram
(redirected to haskell-cafe)

mfix is value recursion, not effect recursion.  It allows you to
tie-the-knot with data being constructed recursively even in a
monadic context.

When you are using the Writer monad like this, the bind operation
between statements in a do construct is just ++.

simulation in your message is

 simulation:: Writer [String] ()
 simulation = mdo
  a <- server cr
  cr <- client $ take 10 a
 return ()

This is really just the following:

 simulation :: Writer [String] ()
 simulation = Writer result where
 ( a, a_out ) = runWriter (server cr)
 ( cr, cr_out ) = runWriter (client $ take 10 a)
 result = ( (), a_out ++ cr_out )

With mdo you are allowed to have the values refer to each other; the
right hand side of ( a, a_out ) = ... can refer to cr and vice
versa.  But there's no way to follow the thread of computation between
the server and the client with this style.
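
To make that expansion concrete, here is a hedged, self-contained stand-in
for the server/client pair (their real definitions are not in this thread;
the bodies below are invented, and mtl's lazy Writer is assumed to have a
MonadFix instance so that mdo works):

{-# LANGUAGE RecursiveDo #-}
module SimulationSketch where

import Control.Monad.Writer

-- Invented stand-ins: each writes one log line and lazily transforms its
-- argument, so the value-level knot tied by mdo can be resolved.
server :: [String] -> Writer [String] [String]
server reqs = do
  tell ["server: logged a batch of requests"]
  return (map ("reply to " ++) reqs)

client :: [String] -> Writer [String] [String]
client replies = do
  tell ["client: logged a batch of replies"]
  return ("request 0" : map (const "another request") replies)

simulation :: Writer [String] ()
simulation = mdo
  a  <- server cr
  cr <- client (take 10 a)
  return ()

-- execWriter simulation is just the server entry followed by the client
-- entry: the effects are not interleaved, only the values are recursive.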

   -- ryan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] 0/0 > 1 == False

2008-01-16 Thread Derek Elkins
On Thu, 2008-01-17 at 03:16 +0100, Mitar wrote:
 Hi!
 
 On Jan 11, 2008 7:30 AM, Cristian Baboi [EMAIL PROTECTED] wrote:
  NaN is not 'undefined'
 
 Why not? What is the semantic difference? I believe Haskell should use
 undefined instead of NaN for all operations which are mathematically
 undefined (like 0/0). NaN should be used in languages which do not
 support such nice Haskell features. If Haskell used undefined, such an
 error would propagate itself to higher levels of the computation; with
 NaN it does not.
 
 if bigComputation > 1
   then ...
   else ...
 
 Would evaluating the else branch be semantically correct if bigComputation
 returns NaN? No, it is not. With undefined this is correctly
 (not) evaluated.

For the love of Pete, floating point numbers are not real numbers.  0/0
is mathematically defined for floating point numbers to be NaN.  If you
don't want to use IEEE floating point numbers, use a different type as
was suggested early in this thread.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] 0/0 > 1 == False

2008-01-16 Thread Felipe Lessa
On Jan 16, 2008 11:30 PM, Derek Elkins [EMAIL PROTECTED] wrote:
 For the love of Pete, floating point numbers are not real numbers.  0/0
 is mathematically defined for floating point numbers to be NaN.  If you
 don't want to use IEEE floating point numbers, use a different type as
 was suggested early in this thread.

In fact, you can be happy just by using Rational

Prelude> 0/0 :: Rational
*** Exception: Ratio.%: zero denominator

or creating a newtype

newtype ZeroUndef a = Z {unZ :: a}

instance Eq a => Eq (ZeroUndef a) where
  Z a == Z b = a == b
  Z a /= Z b = a /= b

instance Show a => Show (ZeroUndef a) where
  ...

instance Num a => Num (ZeroUndef a) where
  ...

instance Fractional a => Fractional (ZeroUndef a) where
  ...
  Z a / Z 0 = error ...
  Z a / Z b = Z (a / b)



so that ZeroUndef Double, ZeroUndef Float, ZeroUndef Matrix and all
friends do work like you want.
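
For completeness, here is a minimal filled-in version of that sketch; the
bodies elided above are completed in one plausible way (this is not the
author's actual code, and only the pieces needed for the 0/0 example are
given):

newtype ZeroUndef a = Z { unZ :: a } deriving (Eq, Show)

instance Num a => Num (ZeroUndef a) where
  Z a + Z b    = Z (a + b)
  Z a - Z b    = Z (a - b)
  Z a * Z b    = Z (a * b)
  abs (Z a)    = Z (abs a)
  signum (Z a) = Z (signum a)
  fromInteger  = Z . fromInteger

instance (Eq a, Fractional a) => Fractional (ZeroUndef a) where
  _   / Z 0    = error "ZeroUndef: zero denominator"
  Z a / Z b    = Z (a / b)
  fromRational = Z . fromRational

-- unZ (1 / 2 :: ZeroUndef Double)  ==  0.5
-- evaluating  0 / 0 :: ZeroUndef Double  raises the error above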

-- 
Felipe.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Data constructors versus types

2008-01-16 Thread Richard A. O'Keefe

On 17 Jan 2008, at 10:56 am, Peter Verswyvelen wrote:

You're talking about O(big)... But wasn't the C language in some way
successful because on the hardware at that time other much nicer
languages (e.g. LISP) were just way too slow? Or was this just O(n)
times slower?


No.  C was designed as a Systems Implementation Language (there were lots
of SILs) with the following advantages:
(1) tolerable code from a fairly naive compiler (the PDP-11 UNIX V7 C
    compiler did a little optimisation, but not very much; function
    entry/exit was done using calls to library support routines in order
    to save space, so every function call involved *three* hardware-level
    function calls; I speeded up a text editor by about 20% by replacing
    just two tiny functions by hand-written assembler, and it was this
    function call overhead that was saved)
(2) tolerably compact code; a fairly useful library of stuff that you
    didn't actually have to use (again, said text editor got a notable
    space saving by *not* using any C stdio stuff at all, not using any
    floating point stuff at all, and not even linking, let alone calling,
    malloc())
(3) fairly direct mapping between language and machine so the performance
    model you had to keep in your head was simple
(4) a fair degree of portability between compilers (although the PC world
    spoiled this to some extent).

Lisp *performance* compared with C was always O(1) and sometimes
excellent; I have had Scheme code running faster than C.  It was the
memory footprint caused by Lisp's comparatively large library (possibly
even including the compiler) always being there in full, and the (largely
imaginary) cost of garbage collection which scared people off.  It is
intensely annoying to an old Lisp hacker to see Java succeeding despite
being worse at just about everything Lisp was ever criticised for.  But
in fairness, the other thing was that before the advent of Common Lisp,
every Lisp was different.  Develop in MacLisp and you could forget about
delivering in Interlisp, and vice versa.  This is why, although I
actually have Lazy ML on my machine still, I dropped it for Haskell.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Data constructors versus types

2008-01-16 Thread Richard A. O'Keefe

On 17 Jan 2008, at 12:31 pm, Achim Schneider wrote:

Lisp is actually not really meant to be compiled, but interpreted.



The classic Lisp is Lisp 1.5.
The Lisp 1.5 Programmer's Manual, published in I think 1961,
contains Appendix D: The Lisp Compiler.
If I'm reading appendix G correctly, the compiler was under
4000 words of storage.


The nice thing is that it doesn't need more than a handful of
primitives, a list parser and heap manager/garbage collector and
evaluator, which all can be implemented in under 1000 lines of C.
Things get more involved with get/cc, but then how many C programmers
ever heard of setjmp...


I have no idea what get/cc might be, unless it is a mistake for call/cc,
but that's Scheme, not Lisp.  Classic Lisp stack management wasn't
really any harder than Pascal stack management (in part because classic
Lisps were still struggling to get closures right).

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] haskelldb basic documentation needed

2008-01-16 Thread Steve Lihn
For mysql (via HDBC), some documentation is available here. But it
goes through HDBC-ODBC-mysql, so it is a bit more complex than you
would normally expect with mysql.

http://en.wikibooks.org/wiki/Haskell/Database

2008/1/15 Justin Bailey [EMAIL PROTECTED]:
 2008/1/15 Immanuel Normann [EMAIL PROTECTED]:

  I don't know what pairs of strings this function needs. The API
 description is to unspecific:
 
 
   The connect function takes some driver specific name, value pairs use to
 setup the database connection, and a database action to run.
 
 
  What are the specific name value pairs needed (for a connection to a mysql
 db )?
  Immanuel

 Your best bet is to download the appropriate drivers - either
 haskelld-hdbc-mysql or haskelldb-hsql-mysql. If you get the haskelldb
 sources via darcs, you can also look in the test directory to see how the
 connections are established.

 In my specific case, I am using PostgreSQL and by login  function looks like
 this:

 -- ^ Returns a function which can log into the database and perform
 operations.
 login :: MonadIO m => String -> Int -> String -> String -> String ->
 (Database -> m a) -> m a
 login server port user password dbname = postgresqlConnect [(host,
 server),
   (port, show port),
   (user, user),
   (password, password),
   (dbname, dbname)]

 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 
 


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Posix select support

2008-01-16 Thread Spencer Janssen
On Wed, Jan 16, 2008 at 02:09:31PM -0600, Galchin Vasili wrote:
 Hi Don,
 
  Sorry, I wasn't clear enough. I am trying to determine from the
 Haskell FFI doc what datatype to use in order to model C's void *, e.g.
 for mmap http://www.opengroup.org/onlinepubs/95399/functions/mmap.html
 
 Regards, Vasili

For C's void *, I'd use Ptr ().


Cheers,
Spencer Janssen
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Posix select support

2008-01-16 Thread Bryan O'Sullivan
Spencer Janssen wrote:

 For C's void *, I'd use Ptr ().

Ptr a seems to be more usual, and hews closer to the idea that it's a
pointer to an opaque value.
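
As a tiny illustration of both conventions, here is a sketch that binds a
real libc function (memset); the Haskell-side names, and the phantom
Buffer tag, are made up for the example:

{-# LANGUAGE ForeignFunctionInterface, EmptyDataDecls #-}
import Foreign.Ptr (Ptr)
import Foreign.C.Types (CInt, CSize)

-- void *memset(void *s, int c, size_t n);
-- modelled with Ptr (): a pointer we say nothing about.
foreign import ccall unsafe "string.h memset"
  c_memset :: Ptr () -> CInt -> CSize -> IO (Ptr ())

-- The same import with a phantom type: still opaque to callers, but it
-- can't be mixed up with other pointer types by accident.
data Buffer
foreign import ccall unsafe "string.h memset"
  c_memsetBuf :: Ptr Buffer -> CInt -> CSize -> IO (Ptr Buffer)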

b
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Data constructors versus types

2008-01-16 Thread Peter Verswyvelen
  ghc -ddump-simpl and assure that your values get unboxed...

I was not really talking about boxed/unboxed values, that's another
issue I think. 

What I was talking about is more related to the work of Neil Mitchell
and Colin Runciman in their static checker for pattern matching
http://www-users.cs.york.ac.uk/~ndm/downloads/paper-unfailing_haskell_a_static_checker_for_pattern_matching-24_sep_2005.pdf

For example, if we would have a language that only knows bits as
datatype, together with data constructors (tagging the bits):

data Number = Int Bits#32
| Float Bits#32

(Int x) + (Int y) = Int (primAddInt32 x y)
(Float x) + (Float y) = Float (primAddFloat32 x y)

etc

(1) Would a sufficiently smart compiler be able to eliminate the tagging
overhead at runtime? Answer: yes?

(2) Would such a language be just as statically safe as Haskell? Answer:
I guess so?

(3) What would the complexity of the compiler given a code size? of n?
Answer: O(???)

(4) Would it be possible to create an incremental compiler that would
reduce the complexity from say quadratic to linear or constant? On very
old computers I worked with great incremental C assemblers and linkers,
where the time needed to recompile/relink was mostly proportional to the
amount of changes one did. I guess current compilers are so complex that
making them incremental would be insane?

Thank you,
Peter

  
  You're talking about O(big)... But wasn't the C language in some way
  successful because on the hardware at that time other much nicer
  languages (e.g. LISP) were just way too slow? Or was this just O(n)
  times slower? 
  
 Compiler technology also wasn't as advanced as now, Stalin can't
 compile even small programs under say 5 minutes... compare this to e.g.
 TurboPascal, which afair uses three passes: Parsing, Error Reporting,
 Code Generation, it was similar with C compilers back then.
 
 Lisp was fast on lisp machines, where it is the same as what C is to
 Neumann-Architectures: An assembler.
 
 I'm not at all sure about the specific O's involved, but I guess it's
 quite easy to get to NP-complete if you want to do really much without
 much information.
 
 
  IMHO: Shouldn't concepts that are conceptually the same (in this case,
  giving meaning/adding constraints to bits of data ) at runtime and
  compile time look very similar in the language? Most languages require
  completely different syntax and code when you want something to be
  lazy versus strict. Haskell doesn't, you can just add an annotation
  if you want it to be strict, no much rewriting is required. However,
  if I want to change a runtime data constructor definition (and code)
  into a compiletime type, then I can rewrite all of my code basically.
  That is not the case in SCHEME as far as I understand it.
  
 Scheme knows no types but the builtins like INT or PAIR or LIST or
 SYMBOL and so on. Even if you distinguish say
 
 ('complex 1 2)
 from
 ('vec3 1 2 3)
 
 , the compiler in general won't stop you from passing these things into
 the wrong functions. It doesn't even know that a function is passed a
 LIST (QUOTEDSYMBOL INT INT) or LIST (QUOTEDSYMBOL INT INT INT), it
 just sees a pair, in both cases.
 
 Lisp is actually not really meant to be compiled, but interpreted. The
 nice thing is that it doesn't need more than a handful of primitives, a
 list parser and heap manager/garbage collector and evaluator, which all
 can be implemented in under 1000 lines of C. Things get more involved
 with get/cc, but then how many C programmers ever heard of setjmp...
 
 on top of my head, one set of possible primitives is
 
 quote lambda set! succ pred cond.
 
 you can then start by defining define by
 
 (set! 'define (lambda (sym f) (set! sym f)))
 
 There's the wizard book and Graham's Ansi Common Lisp if you're
 interested in how cheap lisp actually is.
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Data constructors versus types

2008-01-16 Thread Anton van Straaten

[EMAIL PROTECTED] wrote:

On 2008.01.17 00:58:19 +0100, [EMAIL PROTECTED] scribbled 0.9K characters:

Achim Schneider writes:
Lisp is actually not really meant to be compiled, but interpreted. 

...

Would you mind stopping to spread dubious truths?

...
I don't think it's a dubious truth. 


It's about as accurate as saying Television is actually not really 
meant to be color, but black and white.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe