Re: [Haskell-cafe] compressed pointers?

2008-04-18 Thread Ketil Malde
Don Stewart [EMAIL PROTECTED] writes:

 One small upside (performance wise), is that the bottom 3 bits of the
 pointer are now used to encode the constructor on 64 bits, so 'case' gets a
 good percent cheaper.

Well - my experience (which is from before this optimization was
added, I think) is that 64bit Haskell is slower than 32bit Haskell.

Anyway, I think this is an orthogonal issue - by limiting your program
to 4GB RAM, 4-byte alignment could give you two bits for pointer tagging
and 32 bit pointers.  If you still do 8-byte alignment, there's no
difference. 

In extremis, you could imagine several different models based on heap
size, with optimal choices for pointer size, alignment and tag bits.
If you shift the pointer, you could address up to 16G from 32 bits (by
using 8-byte alignment and one tag bit).

This probably becomes too complicated, but I thought it was
interesting that the Java people are making use of 32bit pointers on a
64bit system, and are seeing a good performance benefit from it.
 
-k
-- 
If I haven't seen further, it is by standing in the footprints of giants
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Embedding newlines into a string?

2008-04-18 Thread Ariel J. Birnbaum
 Things to avoid - HaskellWiki - 7 Related Links:
 http://www.haskell.org/haskellwiki/Things_to_avoid#Related_Links
The link was broken (it had an extra chunk of '- Haskell Wiki' ;) )
so I fixed it. For that matter, the Common Hugs Messages link is
broken too but I can't seem to find the page it should point to.
-- 
Ariel J. Birnbaum


Re: [Haskell-cafe] Re: Embedding newlines into a string?

2008-04-18 Thread Benjamin L. Russell
Ariel,

--- Ariel J. Birnbaum [EMAIL PROTECTED] wrote:

  Things to avoid - HaskellWiki - 7 Related Links:
 

http://www.haskell.org/haskellwiki/Things_to_avoid#Related_Links
 The link was broken (it had an extra chunk of '-
 Haskell Wiki' ;) )
 so I fixed it.

Thank you; sorry about the broken link.

 For that matter, the Common Hugs
 Messages link is
 broken too but I can't seem to find the page it
 should point to.

I just fixed it.  It was supposed to be an external
link to the following Web page:

Some common Hugs error messages
http://www.cs.kent.ac.uk/people/staff/sjt/craft2e/errors/allErrors.html

I discovered that link originally under the following
subsection of HaskellWiki:

Learning Haskell - 2 Material - 2.9 Reference
http://www.haskell.org/haskellwiki/Learning_Haskell#Reference

This time, I have checked my updated link to verify
that it works. ;-)

Benjamin L. Russell


Re: [Haskell-cafe] looking for examples of non-full Functional Dependencies

2008-04-18 Thread Martin Sulzmann

Lennart Augustsson wrote:
To reuse a favorite word, I think that any implementation that 
distinguishes 'a -> b, a -> c' from 'a -> b c' is broken. :)
It does not implement FD, but something else.  Maybe this something 
else is useful, but if one of the forms is strictly more powerful than 
the other then I don't see why you would ever want the less powerful one.



Do you have any good examples, besides the contrived one

class D a b c | a -> b c

instance D a b b => D [a] [b] [b]

where we want to have the more powerful form of multi-range FDs?

Fixing the problem you mention is easy. After all, we know how to derive
improvement for multi-range FDs. But it seems harder to find agreement on
whether

multi-range FDs are short-hands for single-range FDs, or
certain single-range FDs, eg a -> b and a -> c, are shorthands for more
powerful multi-range FDs a -> b c.

I clearly prefer the latter, ie have the more powerful form of FDs.

Martin




Re: [Haskell-cafe] GC'ing file handles and other resources

2008-04-18 Thread Duncan Coutts

On Wed, 2008-04-16 at 11:00 +0530, Abhay Parvate wrote:
 Your mail gives me an idea, though I am not an iota familiar with
 compiler/garbage collector internals. Can we have some sort of
 internally maintained priority associated with allocated objects? The
 garbage collector should look at these objects first when it tries to
 free anything. The objects which hold other system resources apart
 from memory, such as file handles, video memory, and so on could be
 allocated as higher priority objects. Is such a thing possible?

The way I have imagined this when I've faced Conal's problem in the past
(again to do with graphics libs and large foreign allocated bitmaps) is
to assign an optional heap memory equivalent cost to ForeignPtrs.
Basically the idea is to take the foreign memory allocations into
account when considering heap pressure. At the moment a ForeignPtr only
counts for about 10 words of heap pressure when of course it can
represent hundreds of kilobytes. So by assigning a cost we can take that
into account with the normal decisions about when to do a minor and
major GC.

We should be able to treat a ForeignPtr to foreign allocated memory in
just the same way as a ForeignPtr to heap allocated memory in terms of
timing/frequency of GC behaviour.
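The status quo being described — a ForeignPtr whose heap cost is tiny even when it owns a large foreign buffer — looks like this in practice (a sketch; the 1 MB size is arbitrary):

```haskell
import Foreign.C.Types (CInt)
import Foreign.ForeignPtr (newForeignPtr, withForeignPtr)
import Foreign.Marshal.Alloc (finalizerFree, mallocBytes)
import Foreign.Storable (peek, poke)

-- Allocate a 1 MB foreign buffer, wrap it in a ForeignPtr, and
-- round-trip a value through it.  The RTS only accounts for the small
-- ForeignPtr record, not the megabyte it owns, so the buffer exerts
-- almost no GC pressure -- exactly the mismatch described above.
roundTrip :: CInt -> IO CInt
roundTrip x = do
  buf <- mallocBytes (1024 * 1024)
  -- finalizerFree releases the buffer when the GC eventually collects
  -- the ForeignPtr -- possibly long after memory pressure would warrant.
  fp  <- newForeignPtr finalizerFree buf
  withForeignPtr fp $ \p -> do
    poke p x
    peek p

main :: IO ()
main = roundTrip 42 >>= print
```

Attaching an explicit memory-equivalent cost to such a ForeignPtr, as proposed, would let the collector run sooner in exactly this situation.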

Of course it doesn't help with all resources since not all are
equivalent to memory, eg the arbitrary limits some OSs impose on the
number of open file handles.

Duncan



Re: [Haskell-cafe] Hackage being too strict?

2008-04-18 Thread Duncan Coutts

On Tue, 2008-04-15 at 22:15 -0500, John Goerzen wrote:
 When I went to make my upload of MissingH 1.0.1, Hackage rejected it,
 saying:
 
 Instead of 'ghc-options: -XPatternSignatures' use 'extensions: 
 PatternSignatures'
 
 It hadn't rejected MissingH 1.0.0, even though it had the same thing.

I added loads of extra checks recently.

 Now, my .cabal file has this:
 
  -- Hack because ghc-6.6 and the Cabal that comes with ghc-6.8.1
  -- does not understand the PatternSignatures extension.
  -- The Cabal that comes with ghc-6.8.2 does understand it, so
  -- this hack can be dropped if we require Cabal-Version: >= 1.2.3
  If impl(ghc >= 6.8)
    GHC-Options: -XPatternSignatures
 
 which was contributed by Duncan Coutts.

:-)

 It seems arbitrary that Hackage would suddenly reject this valid
 usage.

Yes it is valid though I hope you can see the general intention of the
suggestion. If it were not for the compatibility problem it would be
preferable to use:

if impl(ghc >= 6.8)
  extensions: PatternSignatures

or just unconditionally if that makes sense:

extensions: PatternSignatures

because it encourages packages to declare what they need in a way that is
not compiler-specific (which was one of the aims of Cabal in the first
place).

 Thoughts?

Mmm. So the problem is that previously the .cabal parser was pretty
unhelpful when it came to forwards compatibility. For example for the
Extension enumeration type it was just using the Read instance which
meant that it would fail with a parse error for any new extensions.
That's the real source of the problem, that the parser allows no
forwards compatibility so when new extensions are added, older Cabal
versions will fail with a parse error.

I have now fixed that by eliminating the use of Read in the .cabal
parser and basically adding an Other/Unknown constructor to several of
the enumeration types, including Extension. So as of Cabal-1.4 it will
be possible to add new extensions in later Cabal versions that are not
in Cabal-1.4 without Cabal-1.4 falling over with a parse error. Indeed,
if the compiler knows about the extension then it will actually work.
The only restriction is that unknown extensions cannot be used in
packages uploaded to hackage, which is pretty reasonable I think. If an
extension is going to be used in widely distributed packages then that
extension should be registered in Language.Haskell.Extension. It's
trivial to add and update hackage to recognise it.

So that obviously does not solve the problem that Cabal-1.2 and older
are not very good with forwards compat in the parser. The solution is
probably just to downgrade that check to a warning rather than outright
rejection (or possibly limit the check to extensions that existed in
older Cabal versions). We can make it stricter again in the future when
Cabal-1.4+ is much more widely deployed.

Sound ok?

Duncan



Re: Re[2]: [Haskell-cafe] Hackage being too strict?

2008-04-18 Thread Duncan Coutts

On Fri, 2008-04-18 at 13:59 +0400, Bulat Ziganshin wrote:
 Hello Duncan,
 
 Friday, April 18, 2008, 1:43:24 PM, you wrote:
 
  older Cabal versions). We can make it stricter again in the future when
  Cabal-1.4+ is much more widely deployed.
 
 the problem, imho, is that such tools as Cabal, GHC, Hackage should be
 built with forward and backward compatibility in mind. otherwise,
 Haskell will still remain mostly a hacker tool

Yes! Yes you're absolutely right, that's why I've been fixing it :-)

Duncan



Re[2]: [Haskell-cafe] Hackage being too strict?

2008-04-18 Thread Bulat Ziganshin
Hello Duncan,

Friday, April 18, 2008, 1:43:24 PM, you wrote:

 older Cabal versions). We can make it stricter again in the future when
 Cabal-1.4+ is much more widely deployed.

the problem, imho, is that such tools as Cabal, GHC, Hackage should be
built with forward and backward compatibility in mind. otherwise,
Haskell will still remain mostly a hacker tool

-- 
Best regards,
 Bulat  mailto:[EMAIL PROTECTED]



Re[2]: [Haskell-cafe] compressed pointers?

2008-04-18 Thread Bulat Ziganshin
Hello Ketil,

Friday, April 18, 2008, 10:44:53 AM, you wrote:

 This probably becomes too complicated, but I thought it was
 interesting that the Java people are making use of 32bit pointers on a
 64bit system, and are seeing a good performance benefit from it.

afaik, C compilers support this model too, so it shouldn't be too hard to
compile GHC in such a mode. it's a bit like the small/large memory models of
those 16-bit x86 systems :)


-- 
Best regards,
 Bulat  mailto:[EMAIL PROTECTED]



[Haskell-cafe] Re: Wrong Answer Computing Graph Dominators

2008-04-18 Thread ChrisK

More algebraically, including 'or' for symmetry:

and xs = foldr (&&) True xs
or xs = foldr (||) False xs

The True and False are the (monoid) identities with respect to && and || :

True && x == x
x && True == x

False || x == x
x || False == x

And so an empty list, if defined at all, should be the identity:

and [] = True
or [] = False

In English:
'and' returns False when "is any element of the list false?" is yes
'or' returns True when "is any element of the list true?" is yes
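These identities can be checked mechanically; a small sanity script using the foldr definitions spelled out above:

```haskell
-- Definitions matching the Prelude's and/or.
myAnd, myOr :: [Bool] -> Bool
myAnd = foldr (&&) True
myOr  = foldr (||) False

main :: IO ()
main = do
  -- Empty lists give the monoid identities.
  print (myAnd [])  -- True
  print (myOr [])   -- False
  -- Prepending True never changes a conjunction: and (True:xs) == and xs.
  print (myAnd (True : [False]) == myAnd [False])  -- True
```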

Matthew Brecknell wrote:

Dan Weston wrote:

Here, any path means all paths, a logical conjunction:

and [True, True] = True
and [True  ] = True
and [  ] = True


Kim-Ee Yeoh wrote: 
Hate to nitpick, but what appears to be some kind of a 
limit in the opposite direction is a curious way of arguing 
that: and [] = True.


Surely one can also write

and [False, False] = False
and [False  ] = False
and [  ] = False ???


No. I think what Dan meant was that for all non-null
xs :: [Bool], it is clearly true that:

and (True:xs) == and xs  -- (1)

It therefore makes sense to define (1) to hold also
for empty lists, and since it is also true that:

and (True:[]) == True

We obtain:

and [] == True

Since we can't make any similar claim about the
conjunctions of lists beginning with False, there
is no reasonable argument to the contrary.




Re: [Haskell-cafe] Hackage being too strict?

2008-04-18 Thread John Meacham
On Fri, Apr 18, 2008 at 10:43:24AM +0100, Duncan Coutts wrote:
 I have now fixed that by eliminating the use of Read in the .cabal
 parser and basically adding an Other/Unknown constructor to several of
 the enumeration types, including Extension. So as of Cabal-1.4 it will
 be possible to add new extensions in later Cabal versions that are not
 in Cabal-1.4 without Cabal-1.4 falling over with a parse error. Indeed,
 if the compiler knows about the extension then it will actually work.
 The only restriction is that unknown extensions cannot be used in
 packages uploaded to hackage, which is pretty reasonable I think. If an
 extension is going to be used in widely distributed packages then that
 extension should be registered in Language.Haskell.Extension. It's
 trivial to add and update hackage to recognise it.

And then have everyone have to upgrade their cabal?

It should just be 

 newtype Extension = Extension String

it would simplify a lot of code, be more forwards- and backwards-proof,
and remove oddness like: do 'Other PatternGuards' and 'PatternGuards'
mean the same thing?

In order to be both backwards and forwards compatible, everyone writing
haskell code that links against cabal and cares about extensions will
have to have special code treating both the same, and in fact,
conditionally compile part of it, since 'PatternGuards' might not even
be valid in some old version of cabal.

(replace PatternGuards with some other soon to be standardized
extension)

Normalized data types are a good thing. A centralized registry of things
hackage recognizes is fine, but it shouldn't be cluttering up the source
code.
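John's proposal amounts to something like the following. This is a sketch of the suggested design, not actual Cabal code; `knownExtensions` and `acceptableForHackage` are invented names standing in for hackage's registry check:

```haskell
-- A forwards- and backwards-proof representation: any extension name
-- parses, and there is no 'Other' vs. known-constructor ambiguity.
newtype Extension = Extension String
  deriving (Eq, Show)

parseExtension :: String -> Extension
parseExtension = Extension

-- The centralized registry lives in one place (e.g. on hackage),
-- not cluttering up the data type itself.
knownExtensions :: [Extension]
knownExtensions = map Extension ["PatternGuards", "PatternSignatures", "CPP"]

-- Hackage (alone) can reject names it does not recognise.
acceptableForHackage :: Extension -> Bool
acceptableForHackage e = e `elem` knownExtensions

main :: IO ()
main = print (acceptableForHackage (parseExtension "PatternGuards"))  -- True
```

Under this scheme, old Cabal versions parse new extension names without change; only the registry needs updating.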

John

-- 
John Meacham - ⑆repetae.net⑆john⑈


Re: [Haskell-cafe] Hackage being too strict?

2008-04-18 Thread John Goerzen
On Fri April 18 2008 4:43:24 am Duncan Coutts wrote:
  It seems arbitrary that Hackage would suddenly reject this valid
  usage.

 Yes it is valid though I hope you can see the general intention of the
 suggestion. If it were not for the compatibility problem it would be
 preferable to use:

Sure, I do.  It's a good point.

But I think there are a couple of issues here:

1) Hackage is rejecting things that Cabal accepts without trouble

2) The behavior of Hackage is unpredictable in what it will accept and what 
it will reject

3) The behavior of Hackage changes rapidly

It's been quite frustrating lately as many of my packages that used to upload 
fine still build fine but get rejected at upload time.

I think that Hackage is the wrong place for these checks.  This stuff should 
go in Cabal, and ./setup configure should print a big DEPRECATION WARNING 
for a major release before the stuff gets yanked.

 I have now fixed that by eliminating the use of Read in the .cabal
 parser and basically adding an Other/Unknown constructor to several of
 the enumeration types, including Extension. So as of Cabal-1.4 it will
 be possible to add new extensions in later Cabal versions that are not
 in Cabal-1.4 without Cabal-1.4 falling over with a parse error. Indeed,

That's great news.

 if the compiler knows about the extension then it will actually work.
 The only restriction is that unknown extensions cannot be used in
 packages uploaded to hackage, which is pretty reasonable I think. If an
 extension is going to be used in widely distributed packages then that
 extension should be registered in Language.Haskell.Extension. It's
 trivial to add and update hackage to recognise it.

That makes sense.

 So that obviously does not solve the problem that Cabal-1.2 and older
 are not very good with forwards compat in the parser. The solution is
 probably just to downgrade that check to a warning rather than outright
 rejection (or possibly limit the check to extensions that existed in
 older Cabal versions). We can make it stricter again in the future when
 Cabal-1.4+ is much more widely deployed.

 Sound ok?

Yes, that makes a lot of sense, too.  Can cabal-put be tweaked to make sure 
to output that warning by default?

-- John


Re: [Haskell-cafe] C++ interface with Haskell

2008-04-18 Thread Alfonso Acosta
Although you could use gcc to link the code, I wouldn't recommend it
(mainly because of the problems you are currently having).

Simply call GHC to compile both the C and Haskell code. It will take
care of finding the headers and supplying the necessary linker
arguments.

ghc -ffi -c foo.hs myfoo_c.c

BTW, you don't need to compile via C.

2008/4/17 Miguel Lordelo [EMAIL PROTECTED]:
 Well Isaac...I became now a little bit smarter than yesterday!!!

 I show you the example that I found and on which I'm working.

 File: foo.hs
 module Foo where

 foreign export ccall foo :: Int -> IO Int

 foo :: Int -> IO Int
 foo n = return (length (f n))

 f :: Int -> [Int]
 f 0 = []
 f n = n:(f (n-1))

 To get the C wrapper you insert the following command:
 ghc -ffi -fvia-C -C foo.hs

  After execution you will have these following additional files:

 foo.hc
 foo.hi
 foo_stub.c
 foo_stub.h
 foo_stub.o

 What I did next was to create a file named: myfoo_c.c, where I will call the
 foo function (implemented in Haskell).
  (you can see this example on
 http://www.haskell.org/ghc/docs/latest/html/users_guide/ffi-ghc.html )
 But the problem is to compile with gcc (must I put any flag or whatever set
 something)

 The gcc output is:
 myfoo_c.c:2:19: error: HsFFI.h: No such file or directory

 I downloaded this header file from: (I know that is not the correct way, but
 it was the only idea that occurs at the moment)
 http://www.koders.com/c/fidD0593B84C41CA71319BB079EFD0A2C80211C9337.aspx

 I compiled again and the following return error appears:
 myfoo_c.c:(.text+0x1c): undefined reference to `hs_init'
 myfoo_c.c:(.text+0x31): undefined reference to `foo'
 myfoo_c.c:(.text+0x50): undefined reference to `hs_exit'
  collect2: ld returned 1 exit status

 These functions are necessary to setup GHC runtime (see:
 http://www.haskell.org/ghc/docs/latest/html/users_guide/ffi-ghc.html )

 What I want to know is how to compile myfoo_c.c?! Is it with GCC or GHC?!

 Chears,
 Miguel Lordelo.




 On Wed, Apr 16, 2008 at 9:16 PM, Isaac Dupree [EMAIL PROTECTED]
 wrote:

  perhaps
 
  haskell:
  foreign export foo_func foo :: Int -> IO Int
  -- I forget the rest of the syntax here
 
  C++:
 
  extern "C" {
  int foo_func(int i);
  }
 
  int some_cplusplus_function() {
   int bat = 3;
   int blah = foo_func(bat);
   return blah;
  }
 
 
  Is that all you need to do?
 
 
  Miguel Lordelo wrote:
 
  
  
  
   Hi all,
  
   Well...somehow I'm a beginner in Haskell. But actually my interest in
   Haskell will increase if it is possible to call a haskell function in
 C++.
   Something like GreenCard ( http://www.haskell.org/greencard/ )
 simplifying
   the task of interfacing Haskell programs to external libraries
 (usually).
   But is there also a task to interface a foreign language with Haskell,
 but
   calling Haskell functions. Or c2hs which is an interface generator that
   simplifies the development of Haskell bindings to C libraries.
  
   I want to know this, because in my company some guys are doing some
 testing
   with Frotran and MatLab and I want to show them the power of haskell and
 the
   software which we are using is implemented in C++ (there is the reason
 to
    make Haskell -> C++).
  
   I read somewhere that the only way for C++ calling a haskell function is
 to
   create a binding between Haskell and C and from C to C++, but a easy
 Hello
   World example was not there.
   Unfortunatelly I couldn't found anything usefull, like an complete
 example,
   or how to compile the code from haskell to C to C++.
  
   Can sombody help me, please :P
  
   Chears,
   Miguel Lordelo.
  
  
  
   
  
  




[Haskell-cafe] ANNOUNCE: cpuid 0.2 - Binding for the cpuid machine instruction

2008-04-18 Thread Martin Grabmueller
Hello fellow Haskellers,

I have just uploaded my new package cpuid to Hackage.

Description:

This module provides the function 'cpuid' for accessing information
about the currently running IA-32 processor. Both a function for
calling the 'cpuid' instruction directly and some convenience
functions for common uses are provided. This package is only portable
to IA-32 machines.

Home page: http://uebb.cs.tu-berlin.de/~magr/projects/cpuid/doc/
Hackage page: 
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/cpuid-0.2


I'm looking forward to comments, improvements and bug fixes.

I hope it's useful to you!

Happy Haskell Hacking,
  Martin





Re: [Haskell-cafe] C++ interface with Haskell

2008-04-18 Thread Miguel Lordelo
Thanks,

I found on one site how to compile after creating the stub files with GHC:

First step:
ghc -c -ffi haskell_file.hs
Second step - here it is important to know and write where the ghc
libraries are:
gcc -I /usr/local/lib/ghc-5.04.3/include -c C_file.c
After that it is important to link the created C_file with the stub file and
compile it:
ghc -no-hs-main -o C_file C_file.o haskell_file.o haskell_file_stub.o

The final result is the C_file executable... just run C_file and the
program works correctly.

This information - how to compile and link C with Haskell and call a
Haskell function from C - was quite difficult to find.
But here is my result of googling through the internet for something
useful.

Next challenge: link C++ with C, create some useful documentation and put
it online!

Ciao,
Miguel Lordelo.




On Fri, Apr 18, 2008 at 3:33 PM, Alfonso Acosta [EMAIL PROTECTED]
wrote:

 Although you could use gcc to link the code I wouldn't recommend it
 (mainly for the problems you are currently having)

 SImply call GHC to compile both the C and Haskell code. It will take
 care of finding the headers and supplying the necessary linker
 arguments.

 ghc -ffi -c   foo.hs myfoo_c.c

 BTW, you don't need to compile viaC


Re[2]: [Haskell-cafe] C++ interface with Haskell

2008-04-18 Thread Bulat Ziganshin
Hello Miguel,

Friday, April 18, 2008, 7:06:07 PM, you wrote:

you may look into my freearc.org project

overall, nothing complex as far as you got it :) i use

ghc -c c_file.cpp
ghc --make main.hs c_file.o

in order to call from C++ to Haskell or vice versa you should define
the function in C++ as having extern "C" linkage. i recommend you to
declare the function in a header file which is able to compile either in
C++ mode (used in the first step) or C mode (used in the second step, when
compiling main.hs):

#ifdef  __cplusplus
extern "C" {
#endif
void myfunc(void);
#ifdef  __cplusplus
}
#endif


then you use either a foreign import haskell statement to use the C++ func
from haskell, or foreign export for the other way. i also recommend you
to use a main procedure written in haskell and run your main C function
from this procedure - this is the simplest way to initialize the Haskell
runtime system. that's all
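The Haskell side of such a boundary is a one-line foreign import. To keep this sketch self-contained and linkable it imports libm's sin (which already has C linkage) instead of the myfunc declared in the header above:

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}

-- Importing a C function into Haskell.  The imported symbol must have
-- C (not C++) linkage, which is exactly what the extern "C" wrapper
-- in the header above guarantees for your own functions.
foreign import ccall unsafe "math.h sin"
  c_sin :: Double -> Double

main :: IO ()
main = print (c_sin 0)  -- 0.0
```

The reverse direction is a `foreign export ccall` declaration, as in the foo.hs example earlier in the thread.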


 Thanks,

 I found on one site how to compile after creating the stub files with GHC:

 First step:
 ghc -c -ffi haskell_file.hs
 Second step - here it is important to know and write where are the ghc 
 libraries:
  gcc -I /usr/local/lib/ghc-5.04.3/include -c C_file.c 
 After that it is important to link my creted C_file with the stub file and 
 compile it:
 ghc -no-hs-main -o C_file C_file.o haskell_file.o haskell_file_stub.o
  
 The final result is C_file execution file...just enter C_file and the program 
 is running correctly.

 This information: how to compile and to link C with Haskell and to
 call a Haskell funtion from C was quite difficult.
  But here is my result of googling throw the internet and to find something 
 usefull.

 Next challange: link C++ with C and creating a usefull documentation and put 
 it online!

 Ciao,
 Miguel Lordelo.



Re: [Haskell-cafe] Hackage being too strict?

2008-04-18 Thread Duncan Coutts
In message [EMAIL PROTECTED] John Goerzen
[EMAIL PROTECTED] writes:
 On Fri April 18 2008 4:43:24 am Duncan Coutts wrote:
   It seems arbitrary that Hackage would suddenly reject this valid
   usage.
 
  Yes it is valid though I hope you can see the general intention of the
  suggestion. If it were not for the compatibility problem it would be
  preferable to use:
 
 Sure, I do.  It's a good point.
 
 But I think there are a couple of issues here:
 
 1) Hackage is rejecting things that Cabal accepts without trouble

This is by design. There are dubious things you can do locally that are not
really acceptable in publicly distributed packages. As a simple example you
can't put 'AllRightsReserved' packages on the public hackage.

Hackage upload is an excellent point to apply stricter QA for distributed
packages than what you'd use locally for quick hacks.

Remember that Cabal is aimed as a haskell build system as well as a haskell
package distribution system. So it does not make sense in every context to apply
the strictest levels of checking. For example in the longer term I'd like to see
'cabal build Foo.hs' as an upgraded 'ghc --make Foo.hs' ie just the ordinary
build stuff - but with parallel build and understanding preprocessors and
without having to supply a .cabal file.

So the Cabal checking code has a few levels of severity and depending on the
context we make those fatal errors, warnings or ignore them completely. The
strictest levels of checks are reserved for distributing packages. It's easy to
adjust the levels for individual checks.

 2) The behavior of Hackage is unpredictable in what it will accept and what 
 it will reject

Actually with the latest Cabal it's quite predictable. 'cabal sdist' now reports
exactly the same errors and warnings as hackage upload. You see the difference
at the moment because you're using an older version of Cabal compared to the one
on hackage.

There is also a new 'cabal check' command for running these additional QA checks
(which we hope to extend with more expensive checks along the lines of
autotools's 'make distcheck' feature).

 3) The behavior of Hackage changes rapidly

They will remain synchronised because Hackage just uses the Cabal library to do
its checks.

It changed very rapidly recently because we added this checking infrastructure
and added dozens of new checks based on looking at the dubious things people
have put in existing .cabal files in hackage packages.

 It's been quite frustrating lately as many of my packages that used to upload 
 fine still build fine but get rejected at upload time.

If there are any that you think are rejecting legitimate packages then do
complain (as in this thread). They're not set in stone. Probably better to
complain to cabal-devel or the libraries list, or file bug reports so we spot
them. I do realise there is the danger of going too far and having pointless
finicky rules. We rely on feedback on this.
 
 I think that Hackage is the wrong place for these checks.

I disagree. I think it's absolutely the best place for these checks. Of course
they need to be in the client too, that's coming soon with Cabal-1.4 (or now if
you use the development version of Cabal).

As an ex-linux-distro maintainer I think this is absolutely the right thing to
do - to automate and distribute QA as much as possible. Maintaining and
improving the quality of the hackage collection is really important.

 This stuff should go in Cabal, and ./setup configure should print a big
 DEPRECATION WARNING for a major release before the stuff gets yanked.

Yes, that's what will happen when you use the new Cabal (the same version that
hackage is using). Though it will not print all warnings on configure because we
think that'd probably be too annoying for people working on quick hacks. But it
does run the full set of checks with the 'check' and 'sdist' commands.

  So that obviously does not solve the problem that Cabal-1.2 and older
  are not very good with forwards compat in the parser. The solution is
  probably just to downgrade that check to a warning rather than outright
  rejection (or possibly limit the check to extensions that existed in
  older Cabal versions). We can make it stricter again in the future when
  Cabal-1.4+ is much more widely deployed.
 
  Sound ok?
 
 Yes, that makes a lot of sense, too.  Can cabal-put be tweaked to make sure 
 to output that warning by default?

There's a bug open on that and Ross and Bjorn are working on getting the cgi
upload script to report errors/warnings to HTTP clients that only accept text
rather than HTML output (i.e. cabal upload).


Does that help explain what's going on and what we're up to with this checking
stuff?

I should also note that there will be a Cabal-1.4 and cabal-install release in
the near future but you can grab the pre-releases or the darcs versions now and
try things out and report any problems. The first pre-release was announced to
the libraries list a bit over a week ago. I'll 

Re: [Haskell-cafe] C++ interface with Haskell

2008-04-18 Thread Isaac Dupree
if you'd normally be linking using g++, you'll need (IIRC) -lstdc++ 
added to linking-ghc's command line


Alfonso Acosta wrote:

Although you could use gcc to link the code I wouldn't recommend it
(mainly for the problems you are currently having)

SImply call GHC to compile both the C and Haskell code. It will take
care of finding the headers and supplying the necessary linker
arguments.

ghc -ffi -c   foo.hs myfoo_c.c

BTW, you don't need to compile via C
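[Editor's note: for readers following along, here is a minimal sketch of the C
driver being discussed. It assumes the foo.hs shown in this thread and the
foo_stub.h that GHC generates; the flag spelling is GHC-6.8-era, so check the
users guide linked in the thread.]

```c
/* myfoo_c.c -- a sketch, assuming the foo.hs from this thread.
 * Compile and link with GHC (not bare gcc), so that HsFFI.h and the
 * GHC runtime are found, e.g.:
 *   ghc -no-hs-main foo.hs myfoo_c.c -o myfoo
 */
#include <stdio.h>
#include "HsFFI.h"
#include "foo_stub.h"   /* generated alongside foo_stub.c */

int main(int argc, char *argv[])
{
    hs_init(&argc, &argv);                   /* start the GHC runtime */
    printf("foo(10) = %d\n", (int) foo(10));
    hs_exit();                               /* shut it down again */
    return 0;
}
```

The short answer to the "GCC or GHC?" question in this thread is: let GHC do
both the compile and the link, as Alfonso says; bare gcc only works if you add
GHC's include and library paths by hand.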

2008/4/17 Miguel Lordelo [EMAIL PROTECTED]:

Well Isaac...I became now a little bit smarter than yesterday!!!

I show you the example that I found and on which I'm working.

File: foo.hs
module Foo where

foreign export ccall foo :: Int -> IO Int

foo :: Int -> IO Int
foo n = return (length (f n))

f :: Int -> [Int]
f 0 = []
f n = n:(f (n-1))

To get the C wrapper you insert the following command:
ghc -ffi -fvia-C -C foo.hs

 After execution you will have these following additional files:

foo.hc
foo.hi
foo_stub.c
foo_stub.h
foo_stub.o

What I did next was to create a file named: myfoo_c.c, where I will call the
foo function (implemented in Haskell).
 (you can see this example on
http://www.haskell.org/ghc/docs/latest/html/users_guide/ffi-ghc.html )
But the problem is to compile with gcc (must I put any flag or whatever set
something)

The gcc output is:
myfoo_c.c:2:19: error: HsFFI.h: No such file or directory

I downloaded this header file from: (I know that is not the correct way, but
it was the only idea that occurs at the moment)
http://www.koders.com/c/fidD0593B84C41CA71319BB079EFD0A2C80211C9337.aspx

I compiled again and the following return error appears:
myfoo_c.c:(.text+0x1c): undefined reference to `hs_init'
myfoo_c.c:(.text+0x31): undefined reference to `foo'
myfoo_c.c:(.text+0x50): undefined reference to `hs_exit'
 collect2: ld returned 1 exit status

These functions are necessary to setup GHC runtime (see:
http://www.haskell.org/ghc/docs/latest/html/users_guide/ffi-ghc.html )

What I want to know is how to compile myfoo_c.c?! Is it with GCC or GHC?!

Cheers,
Miguel Lordelo.




On Wed, Apr 16, 2008 at 9:16 PM, Isaac Dupree [EMAIL PROTECTED]
wrote:


perhaps

haskell:
foreign export foo_func foo :: Int -> IO Int
-- I forget the rest of the syntax here

C++:

extern "C" {
int foo_func(int i);
}

int some_cplusplus_function() {
 int bat = 3;
 int blah = foo_func(bat);
 return blah;
}


Is that all you need to do?


Miguel Lordelo wrote:




Hi all,

Well...somehow I'm a beginner in Haskell. But actually my interest in
Haskell will increase if it is possible to call a haskell function in

C++.

Something like GreenCard ( http://www.haskell.org/greencard/ )

simplifying

the task of interfacing Haskell programs to external libraries

(usually).

But is there also a task to interface a foreign language with Haskell,

but

calling Haskell functions. Or c2hs which is an interface generator that
simplifies the development of Haskell bindings to C libraries.

I want to know this, because in my company some guys are doing some

testing

 with Fortran and MATLAB and I want to show them the power of Haskell and

the

software which we are using is implemented in C++ (there is the reason

to

 make Haskell -> C++).

 I read somewhere that the only way for C++ to call a Haskell function is

to

 create a binding between Haskell and C and from C to C++, but an easy

Hello

World example was not there.
 Unfortunately I couldn't find anything useful, like a complete

example,

 or how to compile the code from Haskell to C to C++.

 Can somebody help me, please :P

 Cheers,
Miguel Lordelo.






___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe





___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe






___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hackage being too strict?

2008-04-18 Thread Duncan Coutts
In message [EMAIL PROTECTED] haskell-cafe@haskell.org
writes:
 On Fri, Apr 18, 2008 at 10:43:24AM +0100, Duncan Coutts wrote:
  I have now fixed that by eliminating the use of Read in the .cabal
  parser and basically adding an Other/Unknown constructor to several of
  the enumeration types, including Extension. So as of Cabal-1.4 it will
  be possible to add new extensions in later Cabal versions that are not
  in Cabal-1.4 without Cabal-1.4 falling over with a parse error. Indeed,
  if the compiler knows about the extension then it will actually work.
  The only restriction is that unknown extensions cannot be used in
  packages uploaded to hackage, which is pretty reasonable I think. If an
  extension is going to be used in widely distributed packages then that
  extension should be registered in Language.Haskell.Extension. It's
  trivial to add and update hackage to recognise it.
 
 And then have everyone have to upgrade their cabal?

No, that's the nice thing about the changes I've made already. It already works
(in Cabal-1.4) to use the PArr extension that ghc-6.8.2 supports but is not
finalised yet and is therefore not listed in Language.Haskell.Extension.

So only the list that the hackage server itself knows about has to be updated so
that it can 

 It should just be 
 
  newtype Extension = Extension String

Perhaps but the main point I think this that Cabal/Hackage needs to know both
what the global list of known extensions and what extensions are supported by
particular versions of compilers.

It's obviously legitimate for compilers to add new extensions that are not
globally registered (and that works now in Cabal-1.4) but I don't think it's
legitimate to have such packages be uploaded to hackage. If they're going to be
publicly distributed then the extensions they use should be known too.

Then there's the simple matter of reporting to users that they've mis-spelled an
extension. We need enough information about known extensions to be able to do 
that.

 it would simplify a lot of code, be more forwards and backwards proof,
 and remove oddness like do 'Other PatternGuards' and 'PatternGuards'
 mean the same thing?
 
 In order to be both backwards and forwards compatible, everyone writing
 haskell code that links against cabal and cares about extensions will
 have to have special code treating both the same, and in fact,
 conditionally compile part of it, since 'PatternGuards' might not even
 be valid in some old version of cabal.
 
 (replace PatternGuards with some other soon to be standardized
 extension)
 
 Normalized data types are a good thing. A centralized registry of things
 hackage recognizes is fine, but it shouldn't be cluttering up the source
 code.

Yeah maybe. I don't really care about the representation so long as we can all
agree about what users/packagers/hackage/compilers want from extensions.

Duncan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] announcing franchise 0.0

2008-04-18 Thread David Roundy
I'm pleased to announce the existence (not release, properly) of
franchise, a new configuration/build system for Haskell programs and
packages.

Franchise
=

Franchise is a configuration and build system for Haskell projects.  The
configure system employed in franchise is designed to be easily forward and
backward compatible, meaning that you shouldn't need to change your
Setup.hs file in order to compile with a new version of ghc, and if you
*do* need to make a change in your Setup.hs file, it shouldn't force users
who have an older version of franchise and/or ghc to upgrade either their
compiler or their copy of franchise.  The latter goal is really only going
to be realized if and when a stable version of franchise is released... as
it currently is in something of a pre-alpha state (but usable, for
instance, for compiling darcs).

One goal of franchise is to not require developers to provide redundant
information.  For instance, you've already listed all the modules you use,
and ghc already knows which modules are present in which packages, so
there's in general no need for you to list the packages that you require,
much less their versions.  This enhances both forwards and backwards
compatibility, and just plain makes your life easier.  If a particular
module is provided by more than one package, you may need to disambiguate,
but that's not the common case.

Perhaps also worth mentioning is that franchise supports parallel builds
similar to make -j.  Currently the number of simultaneous builds is fixed
at four.  Franchise does not, however, compute an optimized parallel build
order, with the result that on the darcs repository a franchise build is a
few percent slower than make -j4.

Franchise is currently ghc-specific and won't run on Windows, but patches
to extend either of these limitations would be welcome.  It also currently
won't work when any flags contain space characters (e.g. with
--prefix="/home/user/My stuff"), but the fix for lack of space support is
the same as the fix for Windows support, so far as I can tell.

The package name "franchise" stands for "Fun, relaxing and calming Haskell
into Saturday evening".  It is also something of an antonym of "cabal",
since "franchise" means "the right to vote".  Which also fits in with the
concept of allowing the code to decide on its own dependencies.  Franchise
is made up of some pretty ugly code, with a small amount of pretty
beautiful code.  But it was all code that was fun and relaxing to write.

It you want to have argumentative, stressful conversations, please don't do
so on the subject of fun, relaxing and calming code.

Note: franchise is almost entirely undocumented.  It does only export a
couple of dozen functions, but still might be hard to learn to use.  This
is because writing documentation is not as fun, relaxing or calming as
writing Haskell.  Also, franchise is not yet at the stage where it's likely
to be useful to you without any features added, unless you've got a very
simple project, in which case you should be able to pretty easily copy and
modify an existing franchise Setup.hs file.

To get franchise, run

darcs get http://darcs.net/repos/franchise

(note that this will require darcs 2.0.0)

To build and install franchise, simply execute

runghc Setup.hs install --user --prefix=$HOME

if you don't want to actually install it, just run

runghc Setup.hs build

I think that's all (and obviously, in no particular order).  I hope you
enjoy franchise, or if you don't enjoy franchise, I hope you don't tell me
about it.

David Roundy

P.S. A franchise build file for darcs is included below.  This is a
work-in-progress.  It doesn't build the documentation, doesn't allow the
user to configure on the command-line which packages they want to use
(e.g. bytestring/curl/libwww) and doesn't build the Workaround.hs module
which codes replacements for missing or broken library functions.  But it
*is* able to build darcs, and if these features are added, it will be able
to replace darcs' configure and build system without loss of features or
compatibility.

#!/usr/bin/runhaskell
import Distribution.Franchise

configure = do findPackagesFor "src/darcs.lhs"
               addEnv "GHC_FLAGS" "-DPACKAGE_VERSION=\"2.0.0\""
               addEnv "GHC_FLAGS" "-O2 -Wall -Werror"
               addEnv "CFLAGS" "-DPACKAGE_VERSION=\"2.0.0\""
               -- look for libz
               checkLib "z" "zlib.h" "gzopen(\"temp\",\"w\")"
               -- look for libwww
               do systemOut "libwww-config" ["--cflags"] >>= addEnv "CFLAGS"
                  systemOut "libwww-config" ["--libs"] >>= addEnv "LDFLAGS"
                  addEnv "GHC_FLAGS" "-DHAVE_LIBWWW"
                  putStrLn "Found libwww"
                 `catch` \_ -> putStrLn "Libwww isn't present!"
               -- look for libcurl
               do systemOut "curl-config" ["--cflags"] >>= addEnv "GHC_FLAGS"
                  systemOut "curl-config" ["--cflags"] >>= addEnv "CFLAGS"
                  systemOut "curl-config" ["--libs"] >>= addEnv "LDFLAGS"
 

Re: [Haskell-cafe] Help with associated types

2008-04-18 Thread Emil Axelsson

After some thinking I think I can put my question much simpler:

If I have a class with some dependencies, say

  a - ..., b c - ...

Is it possible to encode this using associated types without having all of a, b 
and c as class parameters?


It seems to me that it's not possible. And if so, I'll simply drop this idea 
(was hoping that ATs would allow me to have fewer class parameters).


Thanks,

/ Emil


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re[2]: [Haskell-cafe] C++ interface with Haskell

2008-04-18 Thread Bulat Ziganshin
Hello Isaac,

Friday, April 18, 2008, 7:27:56 PM, you wrote:

absolutely true! it's required if you use new/delete and other things
supported by c++ RTS

 if you'd normally be linking using g++, you'll need (IIRC) -lstdc++ 
 added to linking-ghc's command line

 Alfonso Acosta wrote:
 Although you could use gcc to link the code I wouldn't recommend it
 (mainly for the problems you are currently having)
 
 Simply call GHC to compile both the C and Haskell code. It will take
 care of finding the headers and supplying the necessary linker
 arguments.
 
 ghc -ffi -c   foo.hs myfoo_c.c
 
 BTW, you don't need to compile via C
 
 2008/4/17 Miguel Lordelo [EMAIL PROTECTED]:
 Well Isaac...I became now a little bit smarter than yesterday!!!

 I show you the example that I found and on which I'm working.

 File: foo.hs
 module Foo where

 foreign export ccall foo :: Int -> IO Int

 foo :: Int -> IO Int
 foo n = return (length (f n))

 f :: Int -> [Int]
 f 0 = []
 f n = n:(f (n-1))

 To get the C wrapper you insert the following command:
 ghc -ffi -fvia-C -C foo.hs

  After execution you will have these following additional files:

 foo.hc
 foo.hi
 foo_stub.c
 foo_stub.h
 foo_stub.o

 What I did next was to create a file named: myfoo_c.c, where I will call the
 foo function (implemented in Haskell).
  (you can see this example on
 http://www.haskell.org/ghc/docs/latest/html/users_guide/ffi-ghc.html )
 But the problem is to compile with gcc (must I put any flag or whatever set
 something)

 The gcc output is:
 myfoo_c.c:2:19: error: HsFFI.h: No such file or directory

 I downloaded this header file from: (I know that is not the correct way, but
 it was the only idea that occurs at the moment)
 http://www.koders.com/c/fidD0593B84C41CA71319BB079EFD0A2C80211C9337.aspx

 I compiled again and the following return error appears:
 myfoo_c.c:(.text+0x1c): undefined reference to `hs_init'
 myfoo_c.c:(.text+0x31): undefined reference to `foo'
 myfoo_c.c:(.text+0x50): undefined reference to `hs_exit'
  collect2: ld returned 1 exit status

 These functions are necessary to setup GHC runtime (see:
 http://www.haskell.org/ghc/docs/latest/html/users_guide/ffi-ghc.html )

 What I want to know is how to compile myfoo_c.c?! Is it with GCC or GHC?!

 Cheers,
 Miguel Lordelo.




 On Wed, Apr 16, 2008 at 9:16 PM, Isaac Dupree [EMAIL PROTECTED]
 wrote:

 perhaps

 haskell:
 foreign export foo_func foo :: Int -> IO Int
 -- I forget the rest of the syntax here

 C++:

 extern "C" {
 int foo_func(int i);
 }

 int some_cplusplus_function() {
  int bat = 3;
  int blah = foo_func(bat);
  return blah;
 }


 Is that all you need to do?


 Miguel Lordelo wrote:



 Hi all,

 Well...somehow I'm a beginner in Haskell. But actually my interest in
 Haskell will increase if it is possible to call a haskell function in
 C++.
 Something like GreenCard ( http://www.haskell.org/greencard/ )
 simplifying
 the task of interfacing Haskell programs to external libraries
 (usually).
 But is there also a task to interface a foreign language with Haskell,
 but
 calling Haskell functions. Or c2hs which is an interface generator that
 simplifies the development of Haskell bindings to C libraries.

 I want to know this, because in my company some guys are doing some
 testing
 with Fortran and MATLAB and I want to show them the power of Haskell and
 the
 software which we are using is implemented in C++ (there is the reason
 to
 make Haskell -> C++).

 I read somewhere that the only way for C++ to call a Haskell function is
 to
 create a binding between Haskell and C and from C to C++, but an easy
 Hello
 World example was not there.
 Unfortunately I couldn't find anything useful, like a complete
 example,
 or how to compile the code from Haskell to C to C++.

 Can somebody help me, please :P

 Cheers,
 Miguel Lordelo.



 


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



 ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe


 

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


-- 
Best regards,
 Bulat                            mailto:[EMAIL PROTECTED]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] looking for examples of non-full Functional Dependencies

2008-04-18 Thread Iavor Diatchki
Hello,

On Thu, Apr 17, 2008 at 12:05 PM, Martin Sulzmann
[EMAIL PROTECTED] wrote:
  Can you pl specify the improvement rules for your interpretation of FDs.
 That would help!

Each functional dependency on a class adds one extra axiom to the
system (aka CHR rule, improvement rule).  For the example in question
we have:

class D a b | a -> b where ...

the extra axiom is:

forall a b c. (D a b, D a c) => (b = c)

This is the definition of functional dependency---it specifies that
the relation 'D' is functional.  An improvement rule follows from a
functional dependency if it can be derived from this rule.  For
example, if we have an instance (i.e., another axiom):

instance D Char Bool

Then we can derive the following theorem:

(D Char a) => (a = Bool)

I think that in the CHR paper this was called "instance improvement".
Note that this is not an extra axiom but rather a theorem---adding it
to the system as an axiom does not make the system any more
expressive.  Now consider what happens when we have a qualified
instance:

instance D a a => D [a] [a]

We can combine this with the FD axiom to get:

(D a a, D [a] b) => b = [a]

This is all that follows from the functional dependency.  Of course,
in the presence of other instances, we could obtain more improvement
rules.
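
[Editor's note: a minimal sketch, not from the original thread, showing the
improvement described above in action: with the instance D Char Bool, GHC
improves the constraint (D Char b) to b = Bool, so the call site needs no
annotation on the result.]

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}

class D a b | a -> b where
  d :: a -> b

instance D Char Bool where
  d c = c == 'x'

-- The FD fixes the result type: from the constraint (D Char b) the
-- compiler derives b = Bool, which is exactly the improvement rule above.
improve :: Char -> Bool
improve = d
```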

As for the consistency rule, it is intended to ensure that instances
are consistent with the FD axiom.  As we saw from the previous
examples, it is a bit conservative in that it rejects some instances
that do not violate the functional dependency.   Now, we could choose
to exploit this fact to compute stronger improvement rules---nothing
wrong with that.  However, this goes beyond FDs.

-Iavor









  I'm simply following Mark Jones' style FDs.

  Mark's ESOP'00 paper has a consistency condition:
  If two instances match on the FD domain then they must also match on their
 range.
  The motivation for this condition is to avoid inconsistencies when
  deriving improvement rules from instances.

  For




  class D a b | a -> b

  instance D a a => D [a] [a]
  instance D [Int] Char


  we get

  D [a] b ==> b = [a]
  D [Int] b ==> b = Char

  In case of

  D [Int] b we therefore get b = Char *and* b = [a] which leads to a
 (unification) error.
  The consistency condition avoids such situations.


  The beauty of formalism FDs with CHRs (or type functions/families) is that
  the whole improvement process becomes explicit. Of course, it has to match
  the programmer's intuition. See the discussion regarding multi-range FDs.

  Martin


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] looking for examples of non-full Functional Dependencies

2008-04-18 Thread Lennart Augustsson
I've never thought of one being shorthand for the other, really.
Since they are logically equivalent (in my interpretation) I don't really
care which one we regard as more primitive.

On Fri, Apr 18, 2008 at 9:26 AM, Martin Sulzmann [EMAIL PROTECTED]
wrote:

 Lennart Augustsson wrote:

  To reuse a favorite word, I think that any implementation that
  distinguishes 'a -> b, a -> c' from 'a -> b c' is broken. :)
  It does not implement FD, but something else.  Maybe this something else
  is useful, but if one of the forms is strictly more powerful than the other
  then I don't see why you would ever want the less powerful one.
 
   Do you have any good examples, besides the contrived one

 class D a b c | a -> b c

 instance D a b b => D [a] [b] [b]

 where we want to have the more powerful form of multi-range FDs?

 Fixing the problem who mention is easy. After all, we know how to derive
 improvement for multi-range FDs. But it seems harder to find agreement
 whether
 multi-range FDs are short-hands for single-range FDs, or
 certain single-range FDs, e.g. a -> b and a -> c, are shorthands for more
 powerful
 multi-range FDs a -> b c.
 I clearly prefer the latter, ie have a more powerful form of FDs.

 Martin


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] announcing franchise 0.0

2008-04-18 Thread Duncan Coutts
In message [EMAIL PROTECTED] haskell-cafe@haskell.org,
[EMAIL PROTECTED] writes:

 One goal of franchise is to not require developers to provide redundant
 information.  For instance, you've already listed all the modules you use,
 and ghc already knows which modules are present in which packages, so
 there's in general no need for you to list the packages that you require,
 much less their versions.

Yeah, this is an important point. Part of the original idea of Cabal was to help
distribution by properly tracking dependencies. It's not too bad for that use
case but it's really a pain in comparison to ghc --make or hmake which just do
the right thing given the environment.

Part of our plan for Cabal 2.x is to do proper module dependency chasing so that
it will work without a .cabal file in simple cases and be usable for the ghc
--make or hmake use cases (but with support for preprocessors and parallel
build). It should be possible to just start hacking and derive a skeleton .cabal
file afterwards if you decide you want to distribute a package.

Duncan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] looking for examples of non-full Functional Dependencies

2008-04-18 Thread Martin Sulzmann

Thanks Iavor! Things become now clear.

Let's consider our running example

class D a b | a -> b
instance D a b => D [a] [b]

which we can write in CHR notation

D a b, D a c ==> b = c   (FD)

D [a] [b] <==> D a b     (Inst)

These rules overlap.

Let's consider the critical pair

D [a] [b], D [a] c

The following two derivations are possible

    D [a] [b], D [a] c
   -->FD    D [a] [b], c = [b]
   -->Inst  D a b, c = [b]


   D [a] [b], D [a] c
   -->Inst  D a b, D [a] c

The two final stores differ which means that the
CHR system is non-confluent. Hence, the
proof theory is (potentially) incomplete.
What does this mean?
Logically true statement may not be derivable
using our CHR/typeclass-FD solver.

Iavor suggests to add the rule

D [a] c, D a b ==> c = [b]   (Imp1)

Makes perfect sense!

This rule is indeed a theorem and makes the system confluent.

But that's not what the FD-CHR paper does.

The FD-CHR paper generates the stronger rule

D [a] c ==> c = [b]   (Imp2)

This rule is NOT a theorem (i.e. not a logical consequence of the
FD and Inst rules).
But this rule also makes the system confluent.

Why does the FD-CHR paper suggest this rule?
Some reasons:

- the (Imp2) matches the GHC and I believe also Hugs implementation
- the (Imp2) is easier to implement, we only need to look for
  a single constraint when firing the rule
- (Arguable) The (Imp2) matches better the programmer's intuition.
  We only consider the instance head when generating improvement
  but ignore the instance context.
- (Historical note: The rule suggested by Iavor was discussed
  when writing the FD-CHR paper but somehow we never
  pursued this alternative design space.
  I have to dig out some old notes; maybe there are some other reasons,
  e.g. infinite completion, why this design space wasn't pursued.)

To summarize, I see now where the confusion lies.
The FD-CHR studies a stronger form of FDs where the CHR
improvement rules generated guarantee confluence but the
rules are not necessarily logical consequence.
Therefore, the previously discussed property

 a -> b and a -> c iff a -> b c

does of course NOT hold. That is,
the combination of improvement rules derived from a -> b and a -> c
is NOT equivalent to the improvement rules derived from a -> b c.
Logically, the equivalence obviously holds.

Martin


Iavor Diatchki wrote:

Hello,

On Thu, Apr 17, 2008 at 12:05 PM, Martin Sulzmann
[EMAIL PROTECTED] wrote:
  

 Can you pl specify the improvement rules for your interpretation of FDs.
That would help!



Each functional dependency on a class adds one extra axiom to the
system (aka CHR rule, improvement rule).  For the example in question
we have:

class D a b | a -> b where ...

the extra axiom is:

forall a b c. (D a b, D a c) => (b = c)

This is the definition of functional dependency---it specifies that
the relation 'D' is functional.  An improvement rule follows from a
functional dependency if it can be derived from this rule.  For
example, if we have an instance (i.e., another axiom):

instance D Char Bool

Then we can derive the following theorem:

(D Char a) => (a = Bool)

I think that in the CHR paper this was called "instance improvement".
Note that this is not an extra axiom but rather a theorem---adding it
to the system as an axiom does not make the system any more
expressive.  Now consider what happens when we have a qualified
instance:

instance D a a => D [a] [a]

We can combine this with the FD axiom to get:

(D a a, D [a] b) => b = [a]

This is all that follows from the functional dependency.  Of course,
in the presence of other instances, we could obtain more improvement
rules.

As for the consistency rule, it is intended to ensure that instances
are consistent with the FD axiom.  As we saw from the previous
examples, it is a bit conservative in that it rejects some instances
that do not violate the functional dependency.   Now, we could choose
to exploit this fact to compute stronger improvement rules---nothing
wrong with that.  However, this goes beyond FDs.

-Iavor








  

 I'm simply following Mark Jones' style FDs.

 Mark's ESOP'00 paper has a consistency condition:
 If two instances match on the FD domain then they must also match on their
range.
 The motivation for this condition is to avoid inconsistencies when
 deriving improvement rules from instances.

 For




  

 class D a b | a -> b

 instance D a a => D [a] [a]
 instance D [Int] Char


 we get

 D [a] b ==> b = [a]
 D [Int] b ==> b = Char

 In case of

 D [Int] b we therefore get b = Char *and* b = [a] which leads to a
(unification) error.
 The consistency condition avoids such situations.


 The beauty of formalism FDs with CHRs (or type functions/families) is that
 the whole improvement process becomes explicit. Of course, it has to match
 the programmer's intuition. See the discussion regarding multi-range FDs.

 Martin





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org

Re: [Haskell-cafe] looking for examples of non-full Functional Dependencies

2008-04-18 Thread Martin Sulzmann

Lennart Augustsson wrote:

I've never thought of one being shorthand for the other, really.
Since they are logically equivalent (in my interpretation) I don't 
really care which one we regard as more primitive.

True. See my response to Iavor's recent email.

Martin



[Haskell-cafe] [ANN] cabal-rpm 0.4

2008-04-18 Thread Bryan O'Sullivan
I've just uploaded version 0.4 of cabal-rpm to Hackage.  This is a
program that generates an RPM package from a Cabal package.  RPM is the
package format used by several major Linux distributions.

New in this version is support for GHC 6.8.2 and the Cabal 1.2 release
series.

Download:

http://hackage.haskell.org/cgi-bin/hackage-scripts/package/cabal-rpm-0.4

Source:

darcs get http://darcs.serpentine.com/cabal-rpm

b


Re: [Haskell-cafe] announce: Glome.hs-0.3 (Haskell raytracer)

2008-04-18 Thread Don Stewart
jsnow:
 A new version of my raytracer is out.  It now supports cones, cylinders, 
 disks, boxes, and planes as base primitives (previously it only 
 supported triangles and spheres), as well as transformations of 
 arbitrary objects (rotate, scale, translate) and the CSG operations 
 difference and intersection.  Perlin noise and Blinn highlights have 
 been added, as well.
 
 http://syn.cs.pdx.edu/~jsnow/glome/
 
 Glome can parse NFF-format scene files (see 
 http://tog.acm.org/resources/SPD/), but many features are only 
 accessible via raw Haskell, since NFF doesn't support very many kinds of 
 primitives.  I included a TestScene.hs file that demonstrates how to 
 create a scene with various kinds of geometry (including a crude attempt 
 at a recursively-defined oak tree) in haskell.  There isn't any 
 documentation yet, but many of the primitives have constructors that 
 resemble their equivalents in povray, so anyone familiar with povray's 
 syntax should be able to figure out what's going on.

Very impressive. Did you consider cabalising the Haskell code, so it 
can be easily distributed from hackage.haskell.org?

I note on the website you say:

no threading (shared-memory concurrency is not supported by ocaml,
in haskell it's buggy)

Could you elaborate on this? Shared memory concurrency is a sweet spot
in Haskell, and heavily utilised, so I think we'd all like to know more
details..

-- Don


Re: Re[2]: [Haskell-cafe] C++ interface with Haskell

2008-04-18 Thread Evan Laforge
To threadjack a little bit, I've been interfacing haskell with c++.
It gets awkward when the c++ structures use STL types like string and
vector.  Of course those are too complex for haskell to marshal to.

What I've been doing is defining an XMarshal variant of the X c++
class, that uses plain c arrays.  Then I marshal to that, and
construct the c++ object properly from XMarshal in the c-c++ wrapper
layer.  On a few occasions, when the c++ class is really big and only
has one STL member, I make a partially constructed c++ object, pass
the array separately, and then construct the proper c++ class from the
broken haskell generated one.  Possibly dangerous as all get-out
because I'm dealing with unconstructed c++ objects, but it seems to
work.

Passing back to haskell is easier since I can use &*vec.begin(),
which according to the internet should be safe because STL guarantees
that vector contents are contiguous.

I'm only saved by the fact that I don't have that many different kinds
of classes to pass.  This would be much more drudgery if I had more.
Does anyone have a better solution or convention for marshalling c++
objects?
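A sketch of the pattern described above, on the Haskell side. Everything here is hypothetical for illustration: `XMarshal` stands in for a plain-C mirror of a C++ class whose `std::vector<double>` member has been flattened to a length/pointer pair, and the offsets, sizes, and the `x_from_marshal` wrapper are invented (real code would compute layout with hsc2hs `#size`/`#peek`):

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign
import Foreign.C.Types

-- hypothetical plain-C mirror of a C++ class with one vector member
data XMarshal = XMarshal
    { xmLen  :: CInt        -- element count of the flattened vector
    , xmData :: Ptr CDouble -- contents of the std::vector<double>
    }

instance Storable XMarshal where
    sizeOf    _ = 16        -- assumption; use (#size XMarshal) in hsc2hs
    alignment _ = 8
    peek p = XMarshal <$> peekByteOff p 0 <*> peekByteOff p 8
    poke p (XMarshal n d) = pokeByteOff p 0 n >> pokeByteOff p 8 d

-- a hypothetical C wrapper then constructs the real C++ object
foreign import ccall unsafe "x_from_marshal"
    xFromMarshal :: Ptr XMarshal -> IO (Ptr ())
```

The C/C++ side of the wrapper does the `new X(...)` from the flat struct, which keeps all C++ constructor logic out of Haskell.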


I've also noticed warnings from g++ about hsc2hs's use of the OFFSETOF
macro on c++ classes, but some googling of g++ mailing lists implied
that it's harmless if you don't have virtual bases, and what sane
person does, so I suppress it now :)


Re: [Haskell-cafe] announce: Glome.hs-0.3 (Haskell raytracer)

2008-04-18 Thread Sebastian Sylvan
On Fri, Apr 18, 2008 at 7:43 PM, Don Stewart [EMAIL PROTECTED] wrote:

 jsnow:
  A new version of my raytracer is out. [...]

 Very impressive. Did you consider cabalising the Haskell code, so it
 can be easily distributed from hackage.haskell.org?

 I note on the website you say:

no threading (shared-memory concurrency is not supported by ocaml,
in haskell it's buggy)

 Could you elaborate on this? Shared memory concurrency is a sweet spot
 in Haskell, and heavily utilised, so I think we'd all like to know more
 details..


Not sure what you need shared memory concurrency for in this case as it
seems to be a straightforward parallelism problem (i.e. the different
threads would be different pixels, there is no sharing needed).
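A minimal sketch of the per-pixel parallelism described here, using Control.Parallel.Strategies; `Colour` and `tracePixel` are placeholders, not Glome's actual types:

```haskell
import Control.Parallel.Strategies

type Colour = (Double, Double, Double)

-- Each pixel is independent work: rows are evaluated in parallel
-- sparks, and the (read-only) scene closed over by tracePixel is
-- shared between threads rather than copied.
render :: Int -> Int -> (Int -> Int -> Colour) -> [[Colour]]
render w h tracePixel =
    [ [ tracePixel x y | x <- [0 .. w - 1] ] | y <- [0 .. h - 1] ]
        `using` parList rdeepseq
```

Running with `+RTS -N2` would then let GHC schedule the sparks on both cores.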

-- 
Sebastian Sylvan
+44(0)7857-300802
UIN: 44640862


Re: [Haskell-cafe] Wrong Answer Computing Graph Dominators

2008-04-18 Thread Dan Weston

Matthew Brecknell wrote:

Dan Weston wrote:

Here, any path means all paths, a logical conjunction:

and [True, True] = True
and [True  ] = True
and [  ] = True


Kim-Ee Yeoh wrote: 
Hate to nitpick, but what appears to be some kind of a 
limit in the opposite direction is a curious way of arguing 
that: and [] = True.


Surely one can also write

and [False, False] = False
and [False  ] = False
and [  ] = False ???


No. I think what Dan meant was that for all non-null
xs :: [Bool], it is clearly true that:

and (True:xs) == and xs  -- (1)

It therefore makes sense to define (1) to hold also
for empty lists, and since it is also true that:

and (True:[]) == True

We obtain:

and [] == True

Since we can't make any similar claim about the
conjuctions of lists beginning with False, there
is no reasonable argument to the contrary.


Also, (and I know none of this is original, but it's worth repeating...)

It is not just the definition of and at stake here. Logical 
propositions that extend painlessly to [] if (and [] == True) become 
inconsistent for [] if (and [] == False) and would have to be checked in 
program calculation.


For instance, in propositional logic, you can prove (using Composition, 
Distribution[2], Material Implication) that for nonnull ys = 
[y0,y1,..,yn], implying everything implies each thing:


x -> (y0 && y1 && ... && yn)
 <==>
(x -> y0) && (x -> y1) && ... && (x -> yn)

Writing this in Haskell and using the fact that x -> y means (not x || 
y), this says that


not x || and ys == and (map (not x ||) ys)

or in pointfree notation:

f . and == and . map f
  where f = (not x ||)

This should look familiar to origamists everywhere. and can be defined 
in terms of foldr iff (and [] == True) [Try it!].
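Taking up the [Try it!]: with (and [] == True), `and` is exactly a right fold seeded with the identity of (&&), and the law above checks out on the empty list (a sketch of the standard definition, written as `and'` to avoid the Prelude clash):

```haskell
-- 'and' as a right fold; True is the identity of (&&), which is
-- precisely what makes the empty-list case come out as True.
and' :: [Bool] -> Bool
and' = foldr (&&) True

-- The law (f . and == and . map f) for f = (not x ||), at ys = []:
--   LHS: not x || and' []         == not x || True == True
--   RHS: and' (map (not x ||) []) == and' []       == True
```

Dually, `or = foldr (||) False`, with False the identity of (||).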


Why is this important?

If and is defined with foldr, then the above can be proven for all 
well-typed f, and for f = (not x ||) in particular, even if ys is null. 
The law is painlessly extended to cover the null case automatically (and 
is therefore consistent):


LHS:  not x || (and []) == not x || True == True
RHS:  and (map (not x ||) []) == and []  == True
  Therefore True |- True, an instance of x |- x

If (and [] == False), then the law becomes inconsistent:

LHS:  not x || (and []) == not x || False == not x
RHS:  and (map (not x ||) []) == and [] == False
  Since not x == False, then x == True
  Therefore, True |- x == -| x (everything is derivable)

so we would have to exclude the null case for this law (and many 
others). Uck! Better stick with (and [] == True)


Naturally, similar reasoning justifies (or [] == False).


Re: [Haskell-cafe] looking for examples of non-full Functional Dependencies

2008-04-18 Thread Lennart Augustsson
BTW, here's a non-contrived example.  It's pretty easy to come up with
examples when you try to use type classes instead of a proper module system.

Here we have expressions parametrized over how identifiers and literals are
represented.  First a simple instance, and then one where all the types are
parametrized over the string representation.  These are the plug-and-play
type of things I'd like to be able to do.

class IsExpr expr id lit | expr -> id lit where
    eId :: id -> expr
    eLit :: lit -> expr
    eApply :: expr -> expr -> expr

data SimpleExpr = SId Char | SLit Int | SApply SimpleExpr SimpleExpr

instance IsExpr SimpleExpr Char Int where
    eId = SId
    eLit = SLit
    eApply = SApply

data FancyExpr str
    = FId (Id str)
    | FLit (Lit str)
    | FApply (FancyExpr str) (FancyExpr str)

data Id str = Id str
data Lit str = LString str | LInt Int

instance IsExpr (FancyExpr str) (Id str) (Lit str) where
    eId = FId
    eLit = FLit
    eApply = FApply


On Fri, Apr 18, 2008 at 9:26 AM, Martin Sulzmann [EMAIL PROTECTED]
wrote:

 Lennart Augustsson wrote:

  To reuse a favorite word, I think that any implementation that
  distinguishes 'a -> b, a -> c' from 'a -> b c' is broken. :)
  It does not implement FD, but something else.  Maybe this something else
  is useful, but if one of the forms is strictly more powerful than the other
  then I don't see why you would ever want the less powerful one.
 
   Do you have any good examples, besides the contrived one

 class D a b c | a -> b c

 instance D a b b => D [a] [b] [b]

 where we want to have the more powerful form of multi-range FDs?

 Fixing the problem you mention is easy. After all, we know how to derive
 improvement for multi-range FDs. But it seems harder to find agreement
 whether multi-range FDs are short-hands for single-range FDs, or certain
 single-range FDs, e.g. a -> b and a -> c, are shorthands for more powerful
 multi-range FDs a -> b c.
 I clearly prefer the latter, i.e. have a more powerful form of FDs.

 Martin




Re: Re[2]: [Haskell-cafe] C++ interface with Haskell

2008-04-18 Thread Don Stewart
qdunkan:
 To threadjack a little bit, I've been interfacing haskell with c++.
 [...]
 Does anyone have a better solution or convention for marshalling c++
 objects?

Would someone like to summarise the current approaches
to combining Haskell  C++ on the Haskell wiki, even if just in bullet
points?

-- Don


Re: [Haskell-cafe] announce: Glome.hs-0.3 (Haskell raytracer)

2008-04-18 Thread Jim Snow

Don Stewart wrote:

 jsnow:
  A new version of my raytracer is out. ...

 Very impressive. Did you consider cabalising the Haskell code, so it
 can be easily distributed from hackage.haskell.org?

 I note on the website you say:

 no threading (shared-memory concurrency is not supported by ocaml,
 in haskell it's buggy)

 Could you elaborate on this? Shared memory concurrency is a sweet spot
 in Haskell, and heavily utilised, so I think we'd all like to know more
 details..

 -- Don

The concurrency bug has to do with excessive memory use, and was discussed 
earlier here on the mailing list (search for Glome).
http://hackage.haskell.org/trac/ghc/ticket/2185


The other problem I had with concurrency is that I was getting about a 
50% speedup instead of the 99% or so that I'd expect on two cores.  I 
figured I'm probably doing something wrong.


I don't have any objection to using cabal, I just haven't gone to the 
trouble to figure it out yet.  Maybe in the next release.



Sebastian Sylvan wrote:
Not sure what you need shared memory concurrency for in this case as 
it seems to be a straightforward parallelism problem (i.e. the 
different threads would be different pixels, there is no sharing needed).


The scene is shared between threads.  (Complex scenes can be quite 
large.)  I'm assuming this is handled as a read-only shared memory 
region or something similar, as one might expect, and is not actually 
copied.


-jim


Re[2]: [Haskell-cafe] announce: Glome.hs-0.3 (Haskell raytracer)

2008-04-18 Thread Bulat Ziganshin
Hello Jim,

Saturday, April 19, 2008, 12:10:23 AM, you wrote:

 The other problem I had with concurrency is that I was getting about a
 50% speedup instead of the 99% or so that I'd expect on two cores.  I 

2 cores doesn't guarantee 2x speedup. some programs are limited by
memory access speed and you still have just one memory :)

-- 
Best regards,
 Bulat  mailto:[EMAIL PROTECTED]



Re: [Haskell-cafe] HTTP and file upload

2008-04-18 Thread Adam Smyczek

Thanks for the snippet.
Sorry, but my question was somehow mis-formulated. I was looking for
a client-side implementation: how to upload a file to any server using
Haskell (mainly using the Browser module from the HTTP package).
Going through the Browser.hs source code a little, I came up with
the following implementation, and your hpaste helped me to test it.

The following code is just a small wrapper around the Browser module
that adds support for the multipart/form-data content type. It's more
or less a prototype but works fine for me.

Looking forward to suggestions how to improve it.
Be gentle, it's beginner code :)

Adam


 
-------------------------------------------------------------------------
-- |
-- Wrapper around Network.Browser module with
-- support for multipart/form-data content type
--
-------------------------------------------------------------------------

module ReviewBoard.Browser (
    formToRequest,
    FormVar(..),
    Form(..)
    ) where

import qualified Network.Browser as HB
import Network.HTTP
import Network.URI
import Data.Char
import Control.Monad.Writer
import System.Random

-- | Form to request for typed form variables
--
formToRequest :: Form -> HB.BrowserAction Request
formToRequest (Form m u vs)
    -- Use multipart/form-data content type when
    -- the form contains at least one FileUpload variable
    | or (map isFileUpload vs) = do
        bnd      <- HB.ioAction mkBoundary
        (_, enc) <- HB.ioAction $ runWriterT $ multipartUrlEncodeVars bnd vs
        let body = concat enc
        return Request
            { rqMethod  = POST
            , rqHeaders =
                [ Header HdrContentType $
                    "multipart/form-data; boundary=" ++ bnd
                , Header HdrContentLength (show . length $ body) ]
            , rqBody = body
            , rqURI  = u }

    -- Otherwise fall back to Network.Browser
    | otherwise = return $ HB.formToRequest (HB.Form m u $ map toHVar vs)

    where
        -- Convert typed variables to Network.Browser variables
        toHVar (TextField n v)  = (n, v)
        toHVar (FileUpload n f) = (n, f)
        toHVar (Checkbox n v)   = (n, map toLower $ show v)

        -- Is file upload
        isFileUpload (FileUpload _ _) = True
        isFileUpload _                = False

        -- Create random boundary string (the exact bounds were garbled
        -- in the archive; any wide range of random digits will do)
        mkBoundary = do
            rand <- randomRIO (1000000000 :: Integer, 9999999999)
            return $ "--------------------" ++ show rand

-- | Encode variables, add boundary header and footer
--
multipartUrlEncodeVars :: String -> [FormVar] -> RqsWriter ()
multipartUrlEncodeVars bnd vs = do
    mapM_ (\v -> tell ["--", bnd, "\r\n"] >> encodeVar v) vs
    tell ["--", bnd, "--", "\r\n"]

-- | Encode variable based on type
--
encodeVar :: FormVar -> RqsWriter ()
encodeVar (TextField n v)    = defaultEncVar n v
encodeVar (Checkbox n True)  = defaultEncVar n "true"
encodeVar (Checkbox n False) = defaultEncVar n "false"
encodeVar (FileUpload n f)   = do
    fc <- liftIO $ readFile f
    tell [ "Content-Disposition: form-data; name=\"", n, "\"; "
         , "filename=\"", f, "\"\r\n"
         , "Content-Type: text/plain\r\n" -- TODO: support other types
         , "\r\n", fc, "\r\n" ]

-- | Default encode method for name/value as string
--
defaultEncVar :: String -> String -> RqsWriter ()
defaultEncVar n v =
    tell [ "Content-Disposition: form-data; name=\"", n, "\"\r\n"
         , "\r\n", v, "\r\n" ]

-------------------------------------------------------------------------
-- Types

-- | Request writer
--
type RqsWriter a = WriterT [String] IO a

-- | Typed form vars
--
data FormVar
    = TextField  String String
    | FileUpload String FilePath
    | Checkbox   String Bool
    deriving Show

-- | And the typed form
--
data Form = Form RequestMethod URI [FormVar]
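A hypothetical use of this wrapper might look like the following (the field names and file path are invented; `browse` and `request` are the standard Network.Browser entry points):

```haskell
-- hypothetical usage: one text field plus one file upload
uploadExample :: URI -> IO ()
uploadExample uri = HB.browse $ do
    rq <- formToRequest $ Form POST uri
              [ TextField  "description" "test upload"
              , FileUpload "file" "report.txt" ]
    _ <- HB.request rq
    return ()
```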




On Apr 15, 2008, at 1:38 AM, Adrian Neumann wrote:


Yes

http://hpaste.org/6990

Am 14.04.2008 um 19:07 schrieb Adam Smyczek:

Is form based file upload supported in HTTP module (HTTP-3001.0.4)?

Adam




Re: [Haskell-cafe] announce: Glome.hs-0.3 (Haskell raytracer)

2008-04-18 Thread David Roundy
On Sat, Apr 19, 2008 at 12:19:19AM +0400, Bulat Ziganshin wrote:
 Saturday, April 19, 2008, 12:10:23 AM, you wrote:
  The other problem I had with concurrency is that I was getting about a
  50% speedup instead of the 99% or so that I'd expect on two cores.  I 
 
 2 cores doesn't guarantee 2x speedup. some programs are limited by
 memory access speed and you still have just one memory :)

In fact, this is relatively easily tested (albeit crudely):  just run two
copies of your single-threaded program at the same time.  If they take
longer than when run one at a time, you can guess that you're
memory-limited, and you won't get such good performance from threading your
code.  But this is only a crude hint, since memory performance is strongly
dependent on cache behavior, and running one threaded job may either do
better or worse than two single-threaded jobs.  If you've got two separate CPUs
with two separate caches, the simultaneous single-threaded jobs should beat the
threaded job (meaning take less than twice as long), since each job should
have full access to one cache.  If you've got two cores sharing a single
cache, the behavior may be the opposite:  the threaded job uses less total
memory than the two single-threaded jobs, so more of the data may stay in
cache.

For reference, on a friend's dual quad-core Intel system (i.e. 8 cores
total), if he runs 8 simultaneous (identical) memory-intensive jobs he only
gets about five times the throughput of a single job, meaning that each core
is running at something like 60% of its CPU capacity due to memory
contention.  It's possible that your system is comparably limited, although
I'd be surprised; somehow it seems unlikely that your ray tracer is
stressing the cache all that much.
-- 
David Roundy
Department of Physics
Oregon State University


Re: [Haskell-cafe] announce: Glome.hs-0.3 (Haskell raytracer)

2008-04-18 Thread Jim Snow

David Roundy wrote:

 In fact, this is relatively easily tested (albeit crudely):  just run two
 copies of your single-threaded program at the same time.  If they take
 longer than when run one at a time, you can guess that you're
 memory-limited. [...]

On a particular scene with one instance of the single-threaded renderer
running, it takes about 19 seconds to render an image.  With two
instances running, they each take about 23 seconds.  This is on an
Athlon-64 3800+ dual core, with 512kB of L2 cache per core.  So, it
seems my memory really is slowing things down noticeably.

-jim



Re: [Haskell-cafe] announce: Glome.hs-0.3 (Haskell raytracer)

2008-04-18 Thread Sebastian Sylvan
On Fri, Apr 18, 2008 at 9:10 PM, Jim Snow [EMAIL PROTECTED] wrote:

 The scene is shared between threads.  (Complex scenes can be quite large.)
  I'm assuming this is handled as a read-only shared memory region or
 something similar, as one might expect, and is not actually copied.


Ah, when people say shared memory concurrency, they usually mean shared
*mutable* memory concurrency, which this isn't.


-- 
Sebastian Sylvan
+44(0)7857-300802
UIN: 44640862


[Haskell-cafe] Re: RFC: A standardized interface between web servers and applications or frameworks (ala WSGI)

2008-04-18 Thread Johan Tibell
First, apologies for not responding earlier.  I spent my week at a
conference in Austria.  Second, thanks for all the feedback!

I thought I'd go through some of my thoughts on the issues raised.  Just
to try to reiterate the goals of this effort:

* To provide a common, no frills interface between web servers and
  applications or frameworks to increase choice for application
  developers.

* To make that interface easy enough to implement so current web
  servers and frameworks will implement it.  This is crucial for it
  being adopted.

* Avoid design decisions that would limit the number of frameworks
  that can use the interface.  One example of a limiting decisions
  would be one that limits the maximal possible performance by using
  e.g. inefficient data types.

I'll try to start with what seems to be the easier issues.

sendfile(2) support
===

I would like see this supported in the interface.  I didn't include it
in the first draft as I didn't have a good idea of where to put it.
One idea would be to add the following field to the Environment
record:

sendfile :: Maybe (FD -> IO ())

Possibly with additional parameters as needed.  The reason that
sendfile needs to be included in the environment instead of just a
binding to the C function is that the Socket used for the connection
is hidden from the application side and its use is abstracted by the
input and output enumerators.

The other suggested solution (to return either an Enumerator or a file
descriptor) might work better.  I just wanted to communicate that I
think it should be included.

Extension HTTP methods
==

I did have extension methods in mind when I wrote the draft but didn't
include it.  I see two possible options.

1. Change the HTTP method enumeration to:

data Method = Get | ... | ExtensionMethod ByteString

2. Treat all methods as bytestrings:

type Method = ByteString

This treatment touches on the discussion on typing further down in
this email.  I still haven't thought enough about the consequences (if
indeed there are any of any importance) of the two approaches.

The Enumerator type
===

To recap, I proposed the following type for the Enumerator abstraction:

type Enumerator = forall a. (a -> ByteString -> IO (Either a a)) -> a -> IO a

The IO monad is a must both in the return type of the Enumerator
function and in the iteratee function (i.e. the first parameter of the
enumartor).  IO in the return type of the enumerator is a must since
the server must perform I/O (i.e. reading from the client socket) to
provide the input and the application might need to perform I/O to
create the response.  The appearance of the IO monad in the iteratee
functions is an optimization.  It makes it possible for the server or
application to act immediately when a chunk of data is received.  This
saves memory when large files are being sent as they can be written to
disk/network immediately instead of being cached in memory.
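To make the left-fold reading concrete, here is a toy enumerator over an in-memory list of chunks, together with a byte-counting iteratee (the names `enumList` and `countBytes` are invented for illustration, not part of the proposal):

```haskell
{-# LANGUAGE RankNTypes #-}
import qualified Data.ByteString as B

type Enumerator = forall a. (a -> B.ByteString -> IO (Either a a)) -> a -> IO a

-- Feed chunks from a list; a Left from the iteratee aborts early,
-- just as a server would stop reading the socket.
enumList :: [B.ByteString] -> Enumerator
enumList chunks iter = go chunks
  where
    go []     acc = return acc
    go (c:cs) acc = do
        r <- iter acc c
        case r of
            Left  acc' -> return acc'   -- iteratee asked to stop
            Right acc' -> go cs acc'

-- An iteratee that counts the total bytes it is fed.
countBytes :: Int -> B.ByteString -> IO (Either Int Int)
countBytes n chunk = return (Right (n + B.length chunk))
```

Running `enumList chunks countBytes 0` folds over the chunks in IO, which is the "unrolled State monad" view described above.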

There are some different design (and possibly performance trade-offs)
that could be made.  The current enumerator type can be viewed as an
unrolled State monad suggesting that it would be possible to change
the type to:

type Enumerator = forall t. MonadTrans t => (ByteString -> t IO (Either a a)) -> t IO a

which is a more general type allowing for an arbitrary monad stack.
Some arguments against doing this:

* The unrolled state version is analogous to a left fold (and can
  indeed be seen as one) and should thus be familiar to all Haskell
  programmers.

* A, possibly unfounded, worry I have is that it might be hard to
  optimize away the extra abstraction layer, putting a performance tax
  on all applications, whether they use the extra flexibility or not.

It would be great if any of the Takusen authors (or Oleg since he
wrote the enumerator paper) could comment on this.

Note: I haven't thought this one through.  It was
suggested to me on #haskell and I thought I should at least bring it
up.

Extra environment variables
===

I've intended all along to include a field for remaining, optional
extra pieces of information taken from e.g. the web server, the shell,
etc.  I haven't come up with a good name for this field but the idea
is to add another field to the Environment:

data Environment = Environment
{ ...
, extraEnvironment :: [(ByteString, ByteString)]
}

Typing and data types
=

Most discussions seem to, perhaps unsurprisingly, have centered around
the use of data types and typing in general.  Let me start by giving
an assumptions I've used when writing this draft:

Existing frameworks already have internal representations of the
request URL, headers, etc.  Changing these would be costly.  Even if
this was done I don't think it is possible to pick any one type that
all frameworks could use to represent an HTTP requests or even parts
of a request.  Different frameworks need different types.  Let me as
an example use 

Re: [Haskell-cafe] announce: Glome.hs-0.3 (Haskell raytracer)

2008-04-18 Thread David Roundy
On Fri, Apr 18, 2008 at 02:09:28PM -0700, Jim Snow wrote:
 On a particular scene with one instance of the single-threaded renderer
 running, it takes about 19 seconds to render an image.  With two
 instances running, they each take about 23 seconds.  This is on an
 Athlon-64 3800+ dual core, with 512kB of L2 cache per core.  So, it
 seems my memory really is slowing things down noticeably.

This doesn't mean there's no hope, it just means that you'll need to be
extra-clever if you're to get a speedup that is close to optimal.  The key
to overcoming memory bandwidth issues is to think about cache use and
figure out how to improve it.  For instance, O(N^3) matrix multiplication
can be done in close to O(N^2) time provided it's memory-limited, by
blocking memory accesses so that you access less memory at once.

In the case of ray-tracing I've little idea where or how you could improve
memory access patterns, but this is what you should be thinking about (if
you want to optimize this code).  Of course, improving overall scaling is
best (e.g. avoiding using lists if you need random access).  Next I'd ask
if there are more efficent or compact data structures that you could be
using.  If your program uses less memory, a greater fraction of that memory
will fit into cache.  Switching to stricter data structures and turning on
-funbox-strict-fields (or whatever it's called) may help localize your
memory access.  Even better if you could manage to use unboxed arrays, so
that your memory accesses really would be localized (assuming that you
actually do have localize memory-access patterns).
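For instance (a sketch, not taken from Glome itself), a vector type with strict fields lets GHC store the components unboxed when compiled with -funbox-strict-fields:

```haskell
{-# OPTIONS_GHC -funbox-strict-fields #-}

-- With -funbox-strict-fields, the three Doubles live unboxed inside
-- the Vec constructor: one heap object instead of four, so traversing
-- many Vecs touches far less (and more contiguous) memory.
data Vec = Vec !Double !Double !Double

dot :: Vec -> Vec -> Double
dot (Vec a b c) (Vec x y z) = a*x + b*y + c*z
```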

Of course, also ask yourself how much memory your program is using in
total.  If it's not much more than 512kB, for instance, we may have
misdiagnosed your problem.
-- 
David Roundy
Department of Physics
Oregon State University


Re: [Haskell-cafe] announce: Glome.hs-0.3 (Haskell raytracer)

2008-04-18 Thread Bryan O'Sullivan
Jim Snow wrote:

 The concurrency bug has to do with excessive memory use, and was
 discussed earlier here on the mailing list (search for Glome).
 http://hackage.haskell.org/trac/ghc/ticket/2185

Interesting.  I looked at your test case.  I can reproduce your problem
when I build with the threaded runtime and run with a single core, but
not if I use +RTS -N2.  Did you overlook the possibility that you may
not have told GHC how many cores to use?

Also, your code is sprinkled with many more strictness annotations than
it needs.

b


Re: [Haskell-cafe] announce: Glome.hs-0.3 (Haskell raytracer)

2008-04-18 Thread Jim Snow

David Roundy wrote:

On Fri, Apr 18, 2008 at 02:09:28PM -0700, Jim Snow wrote:
  

On a particular scene with one instance of the single-threaded renderer
running, it takes about 19 seconds to render an image.  With two
instances running, they each take about 23 seconds.  This is on an
Athlon-64 3800+ dual core, with 512kB of L2 cache per core.  So, it
seems my memory really is slowing things down noticeably.



This doesn't mean there's no hope, it just means that you'll need to be
extra-clever if you're to get a speedup that is close to optimal.  The key
to overcoming memory bandwidth issues is to think about cache use and
figure out how to improve it.  For instance, O(N^3) matrix multiplication
can be done in close to O(N^2) time provided it's memory-limited, by
blocking memory accesses so that you access less memory at once.

In the case of ray-tracing I've little idea where or how you could improve
memory access patterns, but this is what you should be thinking about (if
you want to optimize this code).  Of course, improving overall scaling is
best (e.g. avoiding using lists if you need random access).  Next I'd ask
if there are more efficient or compact data structures that you could be
using.  If your program uses less memory, a greater fraction of that memory
will fit into cache.  Switching to stricter data structures and turning on
-funbox-strict-fields (or whatever it's called) may help localize your
memory access.  Even better if you could manage to use unboxed arrays, so
that your memory accesses really would be localized (assuming that you
actually do have localized memory-access patterns).

Of course, also ask yourself how much memory your program is using in
total.  If it's not much more than 512kB, for instance, we may have
misdiagnosed your problem.
  
Interestingly, switching between Float and Double doesn't make any 
noticeable difference in speed (though I see more rendering artifacts 
with Float).  Transformation matrices are memory hogs, at 24 floats each 
(a 4x4 matrix and its inverse with the bottom rows omitted (they're 
always 0 0 0 1)).  This may be one reason why many real-time ray tracers 
just stick with triangles; a triangle can be transformed by transforming 
its vertices, and then you can throw the matrix away.
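The triangle point above can be sketched as follows. These types are illustrative, not Glome's actual definitions: the matrix is applied to the vertices once, after which nothing needs to keep it alive.

```haskell
-- Illustrative types: a 3x4 affine transform applied once to a
-- triangle's vertices, after which the matrix can be discarded.
data Vec = Vec !Double !Double !Double deriving (Eq, Show)

-- Three rows of a 3x4 matrix; the implicit bottom row is 0 0 0 1.
data Matrix = Matrix (Double, Double, Double, Double)
                     (Double, Double, Double, Double)
                     (Double, Double, Double, Double)

data Triangle = Triangle Vec Vec Vec deriving (Eq, Show)

xfmPoint :: Matrix -> Vec -> Vec
xfmPoint (Matrix (a,b,c,d) (e,f,g,h) (i,j,k,l)) (Vec x y z) =
  Vec (a*x + b*y + c*z + d) (e*x + f*y + g*z + h) (i*x + j*y + k*z + l)

-- Transform the vertices once; no matrix is stored with the triangle.
xfmTriangle :: Matrix -> Triangle -> Triangle
xfmTriangle m (Triangle p q r) =
  Triangle (xfmPoint m p) (xfmPoint m q) (xfmPoint m r)
```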


There are a lot of tricks for making ray tracers more memory-coherent.  
You can trace packets of rays instead of single rays against whatever 
acceleration structure you may be using.  Kd-tree nodes can be compacted 
to fit in a single cacheline if you arrange the tree in memory in a 
particular way that allows you to omit some of the pointers.  (I use BIH 
trees, but the same ideas probably apply.)  A lot of these sorts of 
tricks make the resulting code more complex and/or uglier.


Useful references: What Every Programmer Needs to Know About Memory 
http://lwn.net/Articles/250967/
Siggraph presentation on optimizing ray tracers (warning: ppt) 
http://www.openrt.de/Siggraph05/UpdatedCourseNotes/Stoll_Realtime.ppt


Bryan O'Sullivan wrote:

Jim Snow wrote:

  

 The concurrency bug has to do with excessive memory use, and was
 discussed earlier here on the mailing list (search for Glome).
 http://hackage.haskell.org/trac/ghc/ticket/2185



Interesting.  I looked at your test case.  I can reproduce your problem
when I build with the threaded runtime and run with a single core, but
not if I use +RTS -N2.  Did you overlook the possibility that you may
not have told GHC how many cores to use?

  
I just tested it again.  Memory usage behaves differently depending on 
how many cores I tell it to run on, but it always used the least memory 
when I compiled without threading support.  With -N1 memory usage grows 
faster than -N2, but memory is smaller and doesn't grow larger with each 
re-render (except by a very small amount) if I don't use parmap.

Also, your code is sprinkled with many more strictness annotations than
it needs.

b
  
That doesn't surprise me.  I haven't really figured out why some things 
are faster strict or not strict, or where it doesn't matter or the 
annotations are redundant.


-jim



Re: [Haskell-cafe] announce: Glome.hs-0.3 (Haskell raytracer)

2008-04-18 Thread Don Stewart
jsnow:
 David Roundy wrote:
 On Fri, Apr 18, 2008 at 02:09:28PM -0700, Jim Snow wrote:
   
 On a particular scene with one instance of the single-threaded renderer
 running, it takes about 19 seconds to render an image.  With two
 instances running, they each take about 23 seconds.  This is on an
 Athlon-64 3800+ dual core, with 512kB of L2 cache per core.  So, it
 seems my memory really is slowing things down noticeably.
 
 
 This doesn't mean there's no hope, it just means that you'll need to be
 extra-clever if you're to get a speedup that is close to optimal.  The key
 to overcoming memory bandwidth issues is to think about cache use and
 figure out how to improve it.  For instance, O(N^3) matrix multiplication
 can be done in close to O(N^2) time provided it's memory-limited, by
 blocking memory accesses so that you access less memory at once.
 
 In the case of ray-tracing I've little idea where or how you could improve
 memory access patterns, but this is what you should be thinking about (if
 you want to optimize this code).  Of course, improving overall scaling is
 best (e.g. avoiding using lists if you need random access).  Next I'd ask
 if there are more efficient or compact data structures that you could be
 using.  If your program uses less memory, a greater fraction of that 
 memory
 will fit into cache.  Switching to stricter data structures and turning on
 -funbox-strict-fields (or whatever it's called) may help localize your
 memory access.  Even better if you could manage to use unboxed arrays, so
 that your memory accesses really would be localized (assuming that you
 actually do have localized memory-access patterns).
 
 Of course, also ask yourself how much memory your program is using in
 total.  If it's not much more than 512kB, for instance, we may have
 misdiagnosed your problem.
   
 Interestingly, switching between Float and Double doesn't make any 
 noticeable difference in speed (though I see more rendering artifacts 
 with Float).  Transformation matrices are memory hogs, at 24 floats each 
 (a 4x4 matrix and its inverse with the bottom rows omitted (they're 
 always 0 0 0 1)).  This may be one reason why many real-time ray tracers 
 just stick with triangles; a triangle can be transformed by transforming 
 its vertices, and then you can throw the matrix away.

The only differences I'd expect to see here would
be with -fvia-C -fexcess-precision -O2 -optc-O2

which might trigger some SSE stuff from the C compiler


[Haskell-cafe] I hate Haskell's typeclasses

2008-04-18 Thread Ryan Ingram
WARNING: RANT AHEAD.  Hopefully this fires off some productive
discussion on how to fix these problems!

Don't get me wrong:  I think the idea of typeclasses is great.  Their
implementation in Haskell comes so close to being awesome and then
falls short, and that's almost worse than not being awesome in the
first place!

Some examples of things I think you should be able to do, that just Do
Not Work.  Examples like these are trivial in many other languages,
and they shouldn't be that hard here, either!

1) You can't make sensible default implementations.  For example, it'd
be nice to make all my Monads be Applicatives and Functors without
resorting to Template Haskell or infinite boilerplate.  Why can't I
just write this?

instance Monad m => Applicative m where
    pure  = return
    (<*>) = ap

Sure, I get that there might be ambiguity of which instance to choose.
 But why not warn me about that ambiguity, or let me choose somehow on
a case-by-case basis when it happens?
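Absent such a default, the current workaround is a couple of boilerplate lines per monad. A sketch, using a trivial hypothetical monad `M`:

```haskell
import Control.Monad (ap, liftM)

-- A trivial illustrative monad, standing in for any monad you define.
newtype M a = M { runM :: a }

instance Functor M where
  fmap = liftM          -- boilerplate, repeated for every monad
instance Applicative M where
  pure  = M
  (<*>) = ap            -- boilerplate, repeated for every monad
instance Monad M where
  M x >>= f = f x
```

On modern GHC, where Functor and Applicative are superclasses of Monad, deriving them from `(>>=)` via `liftM`/`ap` like this is exactly the repetition the post wishes could be automatic.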

2) You can't add sensible superclasses.  I was playing with QuickCheck
and wanted to write equal with regards to testing.  So I wrote up a
class for it:

class TestableEq a where
    (~=) :: a -> a -> Property

instance Eq a => TestableEq a where
    -- should be a superclass of Eq instead!
    a ~= b = a == b

instance (Arbitrary a, TestableEq b) => TestableEq (a -> b) where
    f ~= g = forAll arbitrary (\a -> f a ~= g a)

But this doesn't work without overlapping & undecidable instances!

Sure, there is an alternative: I could manually declare instances of
TestableEq for EVERY SINGLE TYPE that is an instance of Eq.  I am sure
nobody here would actually suggest that I do so.

And sure, these extensions are both safe here, because the intent is
that you won't declare instances of TestableEq for things that are
already instances of Eq, and you won't do something stupid like
instance TestableEq a => Eq a.

But why do I need to jump through these hoops for a perfectly safe &
commonly desired operation?

3) There's no reflection or ability to choose an implementation based
on other constraints.

In QuickCheck, (a -> b) is an instance of Arbitrary for appropriate a,
b.  But you can't use this instance in forAll or for testing functions
without being an instance of Show.  Now, this is probably a design
mistake, but it's the right choice with the current typeclass system
(see (2)).  But it'd be a million times better to have something like
the following:

class Arbitrary a => MkArbitrary a where
   mkArbitrary :: Gen (a, String)

case instance MkArbitrary a where
   Show a =>
   mkArbitrary = do
       x <- arbitrary
       return (x, show x)
   otherwise =>
   mkArbitrary = do
       st <- getGenState
       x <- arbitrary
       return (x, "evalGen arbitrary " ++ show st)

With this, QuickCheck could print reproducible test cases painlessly
without adding the requirement that everything is an instance of Show!

Now, you could say that mkArbitrary should be a member function of
Arbitrary, but then you clutter up your instance definitions with tons
of mkArbitrary = defaultMkArbitrary for types that have a Show
instance.

4) Every concrete type should be an instance of Typeable without
having to do anything, and Typeable should give you typecase &
reflection:

genericShow :: Typeable a => a -> String
genericShow x = typecase x of
    String -> x
    (Show t => t) -> show x -- any instance of Show
    _ -> "unknown"
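For comparison, today's `Data.Typeable` can recover a small piece of the wished-for typecase: dispatching on one specific concrete type via `cast`, though still not on "any instance of Show":

```haskell
import Data.Typeable (Typeable, cast)

-- A partial approximation: we can branch on one concrete type (String)
-- via cast, but there is no way to branch on "any instance of Show".
genericShow :: Typeable a => a -> String
genericShow x = case cast x of
  Just s  -> s            -- x really was a String
  Nothing -> "<unknown>"
```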

  -- ryan

P.S. I'd actually love to work on any or all of these problems, but I
can't get GHC to compile!  See http://hpaste.org/5878


Re: [Haskell-cafe] I hate Haskell's typeclasses

2008-04-18 Thread Jonathan Cast

On 18 Apr 2008, at 9:29 PM, Ryan Ingram wrote:

WARNING: RANT AHEAD.


WARNING: RESPONSE IN THE SPIRIT OF THE ORIGINAL AHEAD.


  Hopefully this fires off some productive
discussion on how to fix these problems!


{-# OPTIONS_GHC -fallow-overlapping-instances -fallow-undecidable-instances #-} :)

What you want to work is precisely what this allows.


Don't get me wrong:  I think the idea of typeclasses is great.  Their
implementation in Haskell comes so close to being awesome and then
falls short, and that's almost worse than not being awesome in the
first place!


We've noticed.  The literature on extending Haskell type classes is,  
um, enormous.




Some examples of things I think you should be able to do, that just Do
Not Work.  Examples like these are trivial in many other languages,


I call.  Name a language that is

a) Completely statically typed (no type errors at runtime),
b) Has an ad-hoc overloading mechanism powerful enough to encode Num  
and Monad, and

c) Is substantially better than Haskell + extensions for your examples.

The examples aren't all that long; comparison code snippets shouldn't  
be all that long either.



and they shouldn't be that hard here, either!

1) You can't make sensible default implementations.  For example, it'd
be nice to make all my Monads be Applicatives and Functors without
resorting to Template Haskell or infinite boilerplate.  Why can't I
just write this?

instance Monad m => Applicative m where
    pure  = return
    (<*>) = ap

Sure, I get that there might be ambiguity of which instance to choose.
 But why not warn me about that ambiguity, or let me choose somehow on
a case-by-case basis when it happens?


You can already choose on a case-by-case basis.  In this specific  
case, you can only think of one super-instance, but I can think of  
another:


instance Arrow a => Applicative (a alpha) where
  pure = arr . const
  a <*> b = (a &&& b) >>> arr (uncurry ($))

I think Conal Elliot's recent work of FRP can be extended to show  
that Fudgets-style stream processors can be made instances of  
Applicative by both these methods, with different instances.  So as  
soon as both are present, you have to choose the instance you want  
every time.  Having something like this spring up and bite you  
because of a change in some library you pulled off of Haddock does  
not make for maintainable code.


More generally, specifying what you want is really not hard.  Do you  
really have gazillions of monads in your code you have to repeat this  
implementation for?



2) You can't add sensible superclasses.  I was playing with QuickCheck
and wanted to write equal with regards to testing.  So I wrote up a
class for it:

class TestableEq a where
    (~=) :: a -> a -> Property

instance Eq a => TestableEq a where
    -- should be a superclass of Eq instead!
    a ~= b = a == b


Again, this is one (*) line per type.  How many types do you declare?


instance (Arbitrary a, TestableEq b) => TestableEq (a -> b) where
    f ~= g = forAll arbitrary (\a -> f a ~= g a)

But this doesn't work without overlapping & undecidable instances!

Sure, there is an alternative: I could manually declare instances of
TestableEq for EVERY SINGLE TYPE that is an instance of Eq.  I am sure
nobody here would actually suggest that I do so.


Bzzzt.  Wrong.  Thanks for playing!


And sure, these extensions are both safe here, because the intent


What?  By that reasoning, perl is `safe'.  Haskell is not perl.


is
that you won't declare instances of TestableEq for things that are
already instances of Eq, and you won't do something stupid like
instance TestableEq a => Eq a.

But why do I need to jump through these hoops for a perfectly safe &
commonly desired operation?


It's called a proof obligation.  Haskell is not here to stop you from  
jumping through hoops.  In fact, it is here precisely to force you to  
jump through hoops.  That's why it's called a bondage and discipline  
language.



3) There's no reflection or ability to choose an implementation based
on other constraints.

In QuickCheck, (a -> b) is an instance of Arbitrary for appropriate a,
b.  But you can't use this instance in forAll or for testing functions
without being an instance of Show.  Now, this is probably a design
mistake, but it's the right choice with the current typeclass system
(see (2)).  But it'd be a million times better to have something like
the following:

class Arbitrary a => MkArbitrary a where
   mkArbitrary :: Gen (a, String)

case instance MkArbitrary a where
   Show a =>
   mkArbitrary = do
       x <- arbitrary
       return (x, show x)
   otherwise =>
   mkArbitrary = do
       st <- getGenState
       x <- arbitrary
       return (x, "evalGen arbitrary " ++ show st)


So we compile in a table of every instance and every datatype, add a  
Typeable constraint to forAll (since parametricity just got shot to  
heck), and scan through that table on every test.  Millions of times  
better.  And slower.  And more likely to develop