[GHC] #744: ghc-pkg lies about location of haddock-interfaces and haddock-html

2006-04-13 Thread GHC
#744: ghc-pkg lies about location of haddock-interfaces and haddock-html
-------------------------------------+------------------------------------
     Reporter:  [EMAIL PROTECTED]    |        Owner:
         Type:  bug                  |       Status:  new
     Priority:  normal               |    Milestone:
    Component:  Documentation        |      Version:  6.4.1
     Severity:  minor                |     Keywords:
           Os:  Linux                |   Difficulty:  Easy (1 hr)
 Architecture:  x86                  |
-------------------------------------+------------------------------------
I installed ghc from ghc-6.4.1-1.i386.rpm. This places the haddock
 interfaces and haddock documentation into
 /usr/share/doc/ghc-6.4.1/libraries. However

  ghc-pkg field base haddock-interfaces
  /usr/share/ghc-6.4.1/html/libraries/base/base.haddock

 which is wrong. (I had to modify package.conf by hand.)

 Cheers,
   Misha

 Additional info:
   SuSE 10
uname -a
   Linux avatar 2.6.13-15.8-default #1 Tue Feb 7 11:07:24 UTC 2006 i686
 i686 i386 GNU/Linux
ghc -v
 Glasgow Haskell Compiler, Version 6.4.1, for Haskell 98, compiled by GHC
 version 6.4.1
 Using package config file: /usr/lib/ghc-6.4.1/package.conf
 Using package config file: /home/avatar/.ghc/i386-linux-6.4.1/package.conf
 Hsc static flags: -static

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/744
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler
___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: 6.4.2.20060411 under solaris

2006-04-13 Thread Christian Maeder

Christian Maeder wrote:

RtsUtils.p_o
RtsUtils.c: In function 'time_str':
RtsUtils.c:190: error: too few arguments to function 'ctime_r'


I could carry on after adding an argument ", 26".

C.

--- RtsUtils.c  2006-04-13 09:09:49.778999000 +0200
+++ RtsUtils.c~ 2006-01-12 13:43:03.0 +0100
@@ -185,11 +185,11 @@
     static char nowstr[26];
 
     if (now == 0) {
         time(&now);
 #if HAVE_CTIME_R
-        ctime_r(&now, nowstr, 26);
+        ctime_r(&now, nowstr);
 #else
         strcpy(nowstr, ctime(&now));
 #endif
         memmove(nowstr+16,nowstr+19,7);
         nowstr[21] = '\0';  // removes the \n
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Possible runtime overhead of wrapping the IO monad?

2006-04-13 Thread Simon Peyton-Jones
Brian

I've committed a fix for this. By which I mean that you don't need to
write dropRenderM.  You can just use RenderM as if it were IO.

The change won't be in 6.4.2, but it's in the HEAD and will be in 6.6

Simon
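
For readers following the thread, here is a minimal sketch of what the fix
enables, written against current GHC (so Applicative is added to the
deriving list and MonadIO comes from Control.Monad.IO.Class); the duma_*
names are Brian's hypothetical C entry points from the message below, not
a real library:

  {-# LANGUAGE GeneralizedNewtypeDeriving, ForeignFunctionInterface #-}
  -- Sketch only: with the fix, a newtype wrapper around IO can appear
  -- directly in foreign declarations, so no dropRenderM shim is needed.
  module RenderSketch where

  import Control.Monad.IO.Class (MonadIO)
  import Foreign.Ptr (FunPtr)

  newtype RenderM a = RenderM (IO a)
    deriving (Functor, Applicative, Monad, MonadIO)

  type RenderCallback = Int -> Int -> RenderM ()

  foreign import ccall "wrapper"
    mkRenderCallback :: RenderCallback -> IO (FunPtr RenderCallback)

  foreign import ccall duma_onRender :: FunPtr RenderCallback -> IO ()

  onRender :: RenderCallback -> IO ()
  onRender f = mkRenderCallback f >>= duma_onRender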

| -Original Message-
| From: [EMAIL PROTECTED]
[mailto:glasgow-haskell-users-
| [EMAIL PROTECTED] On Behalf Of Brian Hulley
| Sent: 30 March 2006 03:50
| To: glasgow-haskell-users@haskell.org
| Subject: Re: Possible runtime overhead of wrapping the IO monad?
| 
| Brian Hulley wrote:
|  With -O2 enabled, __ccall_GC duma_vertex3f is indeed called directly
|  instead of vertex3f, from a different module, so that proves that
|  different monads can indeed be used to wrap IO operations without
|  any performance penalty at all.
| 
| However I've just discovered there *is* a penalty for converting between
| callback functions that return a different monad from the IO monad. For
| example, if I have a RenderM monad that allows primitives to be drawn to
| the screen, and a callback:
| 
|    newtype RenderM a = RenderM (IO a) deriving (Functor, Monad, MonadIO)
| 
|    type RenderCallback = Int -> Int -> RenderM ()
| 
| where the intention is that the callback will take the width and height
| of the window and return a RenderM action, the problem is that because
| the FFI does not allow RenderM to appear in a foreign type, the actual
| render function has to be converted into a function which returns an IO
| action instead of a RenderM action eg by:
| 
|    type RenderCallbackIO = Int -> Int -> IO ()
| 
|    dropRenderM :: RenderCallback -> RenderCallbackIO
|    dropRenderM f x y = let RenderM io = f x y in io
| 
|    foreign import ccall duma_onRender :: FunPtr RenderCallbackIO -> IO ()
| 
|    foreign import ccall "wrapper" mkRenderCallbackIO
|        :: RenderCallbackIO -> IO (FunPtr RenderCallbackIO)
| 
|    onRender :: RenderCallback -> IO ()
|    onRender f = mkRenderCallbackIO (dropRenderM f) >>= duma_onRender
| 
| With -O2 optimization, GHC does not seem to be able to optimize out the
| call to dropRenderM even though this just changes the return value of f
| from RenderM (IO a) to IO a, so RenderM is not transparent after all:
| 
| Duma.onRender = \ (f :: Duma.RenderCallback)
| (eta :: GHC.Prim.State# GHC.Prim.RealWorld) ->
| case (# GHC.Prim.State# GHC.Prim.RealWorld, () #)
| Duma.mkRenderCallbackIO
|   (Duma.dropRenderM f) eta
| of wild { (# new_s, a86 #) ->
| case (# GHC.Prim.State# GHC.Prim.RealWorld, () #) a86
| of ds { GHC.Ptr.FunPtr ds1 ->
| case (# GHC.Prim.State# GHC.Prim.RealWorld,
|  () #) {__ccall_GC duma_onRender GHC.Prim.Addr#
|  -> GHC.Prim.State# GHC.Prim.RealWorld
|  -> (# GHC.Prim.State# GHC.Prim.RealWorld #)}
|   ds1 new_s
| of wild1 { (# ds2 #) ->
| (# ds2, GHC.Base.() #)
| }
| }
| }
| 
| I must admit I'm not at all clear how to read the -ddump-simpl output
| so I may have got this wrong, but since Duma.dropRenderM is mentioned,
| I think this means this has not been optimized out.
| 
| Therefore there does seem to be an overhead for using different monads
| at the moment (?)
| 
| Regards, Brian.
| 
| ___
| Glasgow-haskell-users mailing list
| Glasgow-haskell-users@haskell.org
| http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: 6.4.2.20060411 under solaris

2006-04-13 Thread Christian Maeder

Christian Maeder wrote:

RtsUtils.c:190: error: too few arguments to function 'ctime_r'


I could carry on after adding an argument ", 26".


now I get an error when linking the stage2 compiler. How should I fix this?

Cheers Christian

/home/maeder/haskell/solaris/ghc-6.4.2.20060411/ghc/rts/libHSrts_thr.a(OSThreads.thr_o): In function `yieldThread':
OSThreads.c:(.text+0x88): undefined reference to `sched_yield'
collect2: ld returned 1 exit status
ghc: 14133388 bytes, 3 GCs, 165404/165404 avg/max bytes residency (1 samples),
 15M in use, 0.00 INIT (0.00 elapsed), 0.13 MUT (8.89 elapsed), 0.03 GC (0.06 elapsed) :ghc
gmake[2]: *** [stage2/ghc-6.4.2.20060411] Error 1
gmake[2]: Leaving directory `/home/maeder/haskell/solaris/ghc-6.4.2.20060411/ghc/compiler'
gmake[1]: *** [stage2] Error 2
gmake[1]: Leaving directory `/home/maeder/haskell/solaris/ghc-6.4.2.20060411'
gmake: *** [bootstrap2] Error 2
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: 6.4.2.20060411 under solaris

2006-04-13 Thread Simon Marlow

Christian Maeder wrote:

Christian Maeder wrote:


RtsUtils.c:190: error: too few arguments to function 'ctime_r'



I could carry on after adding an argument ", 26".



now I get an error when linking the stage2 compiler. How should I fix this?

Cheers Christian

/home/maeder/haskell/solaris/ghc-6.4.2.20060411/ghc/rts/libHSrts_thr.a(OSThreads.thr_o): In function `yieldThread':
OSThreads.c:(.text+0x88): undefined reference to `sched_yield'
collect2: ld returned 1 exit status
ghc: 14133388 bytes, 3 GCs, 165404/165404 avg/max bytes residency (1 samples),
 15M in use, 0.00 INIT (0.00 elapsed), 0.13 MUT (8.89 elapsed), 0.03 GC (0.06 elapsed) :ghc
gmake[2]: *** [stage2/ghc-6.4.2.20060411] Error 1
gmake[2]: Leaving directory `/home/maeder/haskell/solaris/ghc-6.4.2.20060411/ghc/compiler'
gmake[1]: *** [stage2] Error 2
gmake[1]: Leaving directory `/home/maeder/haskell/solaris/ghc-6.4.2.20060411'
gmake: *** [bootstrap2] Error 2


I've been rather busy today with Haskell' and ICFP reviewing, so I won't 
be able to do the 6.4.2 release until next week (probably Tuesday, 
Monday is a holiday in the UK).


If you have fixes for these, and get them to me before Monday, I *might* 
be able to get them into the release.  It's a bit late though.


The sched_yield() thing looks like some extra library needs to be linked 
in under Solaris for the threaded RTS.


Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: 6.4.2.20060411 under solaris

2006-04-13 Thread Christian Maeder

Christian Maeder wrote:

OSThreads.c:(.text+0x88): undefined reference to `sched_yield'
collect2: ld returned 1 exit status


I could fix this by adding rt to the extra-libraries of the rts 
package.conf file.


Now I have a stage2 compiler but gmake binary-dist does not work. I 
assume a couple of variables are not set up. What is going there? The 
Makefile has the line:


BIN_DIST_DIRS=$($(Project)BinDistDirs)

where I don't find BinDistDirs

Cheers Christian

--- ghc/rts/package.conf.inplace    Thu Apr 13 15:49:49 2006
+++ ghc/rts/package.conf.inplace~   Wed Apr 12 19:44:55 2006
@@ -429,7 +429,6 @@

 extra-libraries:   m
  , gmp
-  , rt
  , dl


-bash-3.00$ gmake binary-dist Project=Ghc
rm -rf /home/maeder/haskell/solaris/ghc-6.4.2.20060411/-
rm -f /home/maeder/haskell/solaris/ghc-6.4.2.20060411/-.tar.gz
echo BIN_DIST_DIRS = 
BIN_DIST_DIRS =
/bin/sh: syntax error at line 1: `;' unexpected
gmake: *** [binary-dist-pre] Error 2
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: 6.4.2.20060411 under solaris

2006-04-13 Thread Simon Marlow

Christian Maeder wrote:

Christian Maeder wrote:


OSThreads.c:(.text+0x88): undefined reference to `sched_yield'
collect2: ld returned 1 exit status



I could fix this by adding rt to the extra-libraries of the rts 
package.conf file.


Now I have a stage2 compiler but gmake binary-dist does not work. I 
assume a couple of variables are not set up. What is going there? The 
Makefile has the line:


BIN_DIST_DIRS=$($(Project)BinDistDirs)

where I don't find BinDistDirs

Cheers Christian

--- ghc/rts/package.conf.inplace    Thu Apr 13 15:49:49 2006
+++ ghc/rts/package.conf.inplace~   Wed Apr 12 19:44:55 2006
@@ -429,7 +429,6 @@

 extra-libraries:   m
  , gmp
-  , rt
  , dl


Ok, does this help instead (compile stage1 with this change):

*** DriverState.hs.~1.116.2.2.~ 2005-10-13 10:02:19.0 +0100
--- DriverState.hs  2006-04-13 15:32:02.0 +0100
***************
*** 418,423 ****
--- 418,425 ----
  #if defined(freebsd_TARGET_OS)
  "-optc-pthread"
  , "-optl-pthread"
+ #elif defined(solaris_TARGET_OS)
+ "-optl-lrt"
  #endif
] ),



-bash-3.00$ gmake binary-dist Project=Ghc
rm -rf /home/maeder/haskell/solaris/ghc-6.4.2.20060411/-
rm -f /home/maeder/haskell/solaris/ghc-6.4.2.20060411/-.tar.gz
echo BIN_DIST_DIRS = 
BIN_DIST_DIRS =
/bin/sh: syntax error at line 1: `;' unexpected
gmake: *** [binary-dist-pre] Error 2


Very strange.  It works here, because I built two binary dists last night.

GhcBinDistDirs is set by ghc/mk/config.mk, which is included by the 
top-level Makefile.


$ make show Project=Ghc VALUE=GhcBinDistDirs
GhcBinDistDirs=ghc libraries hslibs

what does this do in your tree?

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: 6.4.2.20060411 under solaris

2006-04-13 Thread Christian Maeder

Simon Marlow wrote:
GhcBinDistDirs is set by ghc/mk/config.mk, which is included by the 
top-level Makefile.


I've no such variable in ghc/mk/config.mk or ghc/mk/config.mk.in

C.
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: 6.4.2.20060411 under solaris

2006-04-13 Thread Christian Maeder

Simon Marlow wrote:


GhcBinDistDirs is set by ghc/mk/config.mk, which is included by the 
top-level Makefile.


I see, there's another mk/config.mk in the subdirectory ghc


$ make show Project=Ghc VALUE=GhcBinDistDirs
GhcBinDistDirs=ghc libraries hslibs


in this subdirectory I get the same result, but
'GhcBinDistDirs=' one level up.

Christian
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Serious bug with ghc FC5

2006-04-13 Thread Alain Cremieux

(resent after being indefinitely held in fedora-haskell validation queue)

Hi,

1) I have installed FC5 on 2 different machines.  On my Athlon1800+
everything works perfectly.
My other machine is a Pentium IV with hyperthreading, considered by
Linux as SMP (x86 32). This is where problems occur

2) I have installed GHC-6.4.1 from Fedora Extras
When I compile 'Omega' with it, Omega (which has a read-eval system)
produces a 'mallocBytesRWX: failed to protect 0x' message

I have compiled ghc-6.5... from the head. Compiled with GHC-6.4.1, I get
the 'mallocBytesRWX...' message when I run 'ghci'

3) I have then patched 'RtsUtils.c' to suppress the
'barf(mallocBytesRWX...', and rerun the compilation of ghc-6.5...
'Omega' compiled with the resulting ghc-6.5 works OK. ghci still
produces the 'malloc...' error

4) When I compile ghc-6.5... with my patched ghc-6.5..., ghci does not
produce the 'mallocBytesRWX' error (which is logical), but in some cases
creates a 'segfault'

So obviously the ghc-6.4.1 rpm is incorrect, as is the source ghc HEAD
version. Probably an incorrect #ifdef here :
#elif defined(openbsd_HOST_OS) || defined(linux_HOST_OS) ||
defined(darwin_HOST_OS)
on a Pentium IV machine.

And there is probably something else somewhere.

Alain


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Serious bug with ghc FC5

2006-04-13 Thread Jon Fairbairn
On 2006-04-13 at 20:18+0200 Alain Cremieux wrote:
 (resent after being indefinitely held in fedora-haskell validation queue)
 
 Hi,
 
 1) I have installed FC5 on 2 different machines.  On my Athlon1800+
 everything works perfectly.
 My other machine is a Pentium IV with hyperthreading, considered by
 Linux as SMP (x86 32). This is where problems occur
 
 2) I have installed GHC-6.4.1 from Fedora Extras
 When I compile 'Omega' with it, Omega (which has a read-eval system)
 produces a 'mallocBytesRWX: failed to protect 0x' message

Is this an SELinux issue like the one I posted on Trac
(#738)? Does it still occur if you do a setenforce
Permissive?

Cheers,
  Jón

-- 
Jón Fairbairn  Jon.Fairbairn at cl.cam.ac.uk


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Possible runtime overhead of wrapping the IO monad?

2006-04-13 Thread Brian Hulley

Simon Peyton-Jones wrote:

Brian

I've committed a fix for this. By which I mean that you don't need to
write dropRenderM.  You can just use RenderM as if it were IO.

The change won't be in 6.4.2, but it's in the HEAD and will be in 6.6


Thanks!

Cheers, Brian.
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


GHC and Cygwin/MinGW

2006-04-13 Thread Rich Fought



Hello,

I'm trying to port a linux-based Haskell application over to Win32. I am
fiddling with both MinGW and Cygwin with varying degrees of bafflement.
This is a server app that utilizes secure connections with the GnuTLS
libraries.

From what I understand, the Win32 version of GHC targets MinGW. Does this
mean that the GnuTLS libraries must be compiled for MinGW as well, or is
it possible to link in cygwin libraries via the cygwin.dll somehow?

Also, any ideas how difficult a Cygwin port of GHC would be? Tips would be
much appreciated.

Thanks,
Rich

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


[Haskell] standard prelude and specifications

2006-04-13 Thread Laszlo Nemeth

Hi,

Chp 8 of the Haskell Report says:

In this chapter the entire Haskell Prelude is given. It constitutes a
*specification* for the Prelude. Many of the definitions are written
with clarity rather than efficiency in mind, and it is not required
that the specification be implemented as shown here.




My question is how strictly this word specification is to be 
interpreted? I can think of a strict and a loose interpretation:


(1 - strict) Whatever invariant I can read out from the code which is
given, I am allowed to interpret as part of the specification. E.g.,
here is the code for filter:


filter :: (a -> Bool) -> [a] -> [a]
filter p []                 = []
filter p (x:xs) | p x       = x : filter p xs
                | otherwise = filter p xs

Primarily this states that the resulting list has no elements for which
the predicate 'p' does not hold.


But I can also read out from the code that filter does not rearrange the 
elements in the list: for example if the list was sorted, it remains so 
afterwards. Under the strict interpretation this is also taken as part
of the specification for filter.


(2 - loose) Filter is meant to drop elements for which the predicate
'p' doesn't hold, and an implementation is free to rearrange the elements.
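
To make the loose reading concrete, here is a contrived implementation (my
example) that keeps exactly the elements satisfying the predicate but
returns them in a different order; it would be admissible under (2) but
not under (1):

  -- Keeps the right elements, but in reverse order, so a sorted input
  -- need not stay sorted.  It is also not productive on infinite lists,
  -- another observable difference from the Prelude code above.
  filterLoose :: (a -> Bool) -> [a] -> [a]
  filterLoose p = foldl keep []
    where
      keep acc x
        | p x       = x : acc
        | otherwise = acc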


There must have been some discussion of this earlier but google didn't 
find anything useful.


Thanks, Laszlo
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] standard prelude and specifications

2006-04-13 Thread Malcolm Wallace
Laszlo Nemeth [EMAIL PROTECTED] wrote:

 Chp 8 of the Haskell Report says:
 
  In this chapter the entire Haskell Prelude is given. It constitutes
  a  *specification* for the Prelude.
 
 My question is how strictly this word specification is to be 
 interpreted? I can think of a strict and a loose interpretation:

Surely the obvious meaning is observational equivalence, also sometimes
known as referential transparency.

For any Prelude function specified as p, you can implement it
differently as p', provided that for all possible arguments x to p,

p x == p' x

So this is your non-loose interpretation.  I hesitate to use the word
strict, because that has a different accepted meaning in this context.
If p is strict in x, then p' must be strict in x as well, and vice
versa; if p is non-strict in x, then p' must also be non-strict.

The important point is that p and p' could use algorithms that belong to
different complexity classes, or have different constant factors, and
that is fine, provided the results are the same.
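
As a concrete instance of this reading (my example, not Malcolm's): the
following filter uses a different formulation from the Prelude code quoted
earlier in the thread, yet is observationally the same, including its
behaviour on infinite lists:

  -- Observationally equivalent to the specification of filter: the same
  -- results and the same (non-)strictness, merely written via foldr.
  filter' :: (a -> Bool) -> [a] -> [a]
  filter' p = foldr keep []
    where
      keep x acc
        | p x       = x : acc
        | otherwise = acc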

Regards,
Malcolm
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell] Platform-dependent behaviour with functions on NaN

2006-04-13 Thread Geisler, Tim (EXT)



In Haskell, the behaviour of functions on floating-point values which are
NaN can be platform dependent.

On "SunOS sun 5.9 Generic_118558-09 sun4u sparc SUNW,Sun-Blade-1500":

Prelude> ceiling (0/0)
359538626972463141629054847463408713596141135051689993197834953606314521560057077521179117265533756343080917907028764928468642653778928365536935093407075033972099821153102564152490980180778657888151737016910267884609166473806445896331617118664246696549595652408289446337476354361838599762500808052368249716736

On Windows:

Prelude> ceiling (0/0)
-269653970229347386159395778618353710042696546841345985910145121736599013708251444699062715983611304031680170819807090036488184653221624933739271145959211186566651840137298227914453329401869141179179624428127508653257226023513694322210869665811240855745025766026879447359920868907719574457253034494436336205824

Both machines use the binary distributions of GHC 6.4.1.

In our production code, this error (which is actually an error in our
program) occurred inside a quite complex expression which can be
simplified to max 0 (ceiling (0/0)). On Windows, we did not recognize the
error in the program, because the complex expression just returned 0. On
Solaris, the complex expression returned this large number (which
represents in the application "the number of CPUs needed in a certain
device" ;-)

We develop Haskell programs on Windows and run them in production on Sparc
with Solaris. It seems that we have to run special regression tests
testing for differences between Sparc Solaris and Windows.

The Haskell 98 report, http://www.haskell.org/onlinereport/basic.html#sect6.4,
states: "The results of exceptional conditions (such as overflow or
underflow) on the fixed-precision numeric types are undefined; an
implementation may choose error (_|_, semantically), a truncated value,
or a special value such as infinity, indefinite, etc."

There has been some discussion six years ago and nearly two years ago:
http://blog.gmane.org/gmane.comp.lang.haskell.glasgow.user/month=20040801

Is there a chance to
- properly define the behaviour of functions depending on the function
  properFraction for values like NaN, Infinity, ...?
- get an implementation of this in GHC which computes the same results
  for all platforms?

Perhaps the function properFraction could raise an exception in case of
isNaN and isInfinity?
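
A minimal sketch of the guarded behaviour suggested here (my illustration,
with a hypothetical name, not a concrete proposal text):

  -- Reject NaN and infinity up front instead of letting properFraction
  -- reconstruct a platform-dependent huge Integer from the bit pattern.
  safeCeiling :: (RealFloat a, Integral b) => a -> b
  safeCeiling x
    | isNaN x      = error "safeCeiling: NaN"
    | isInfinite x = error "safeCeiling: Infinity"
    | otherwise    = ceiling x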

Other languages are more portable. E.g., for Java, these cases are
defined.
http://java.sun.com/docs/books/jls/second_edition/html/typesValues.doc.html#9249
states: "All numeric operations with NaN as an operand produce NaN as a
result."

Tim
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell] Re: Platform-dependent behaviour with functions on NaN

2006-04-13 Thread Simon Marlow

[ CC'ing glasgow-haskell-users@haskell.org ]

Geisler, Tim (EXT) wrote:
In Haskell, the behaviour of functions on floating-point values which 
are NaN can be platform dependent.
 
On SunOS sun 5.9 Generic_118558-09 sun4u sparc SUNW,Sun-Blade-1500:

Prelude> ceiling (0/0)
359538626972463141629054847463408713596141135051689993197834953606314521560057077521179117265533756343080917907028764928468642653778928365536935093407075033972099821153102564152490980180778657888151737016910267884609166473806445896331617118664246696549595652408289446337476354361838599762500808052368249716736
 
On Windows:
 
Prelude> ceiling (0/0)

-269653970229347386159395778618353710042696546841345985910145121736599013708251444699062715983611304031680170819807090036488184653221624933739271145959211186566651840137298227914453329401869141179179624428127508653257226023513694322210869665811240855745025766026879447359920868907719574457253034494436336205824
 
Both machines use the binary distributions of GHC 6.4.1.


I assure you this isn't intentional.  In fact, I'm not sure why Sparc 
should be any different.  I don't have any Sparc machines to test on, 
and on all the platforms I have access to here I get a consistent answer 
(the same as the Windows answer you quoted above).


As far as I can tell, GHC is just using the Prelude definitions of the 
functions involved, there is no platform-specific code at the Haskell level.


What does 'decodeFloat (0/0)' return on your Sparc?
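
For reference, the route that makes this question interesting can be
sketched as follows (Haskell 98 Prelude structure: ceiling goes through
properFraction, which for Double is built on decodeFloat, so whatever the
platform's NaN bit pattern decodes to ends up scaled into a huge Integer):

  -- Evaluating these two side by side shows the connection between the
  -- platform's NaN decomposition and the enormous ceiling results quoted
  -- above.
  nanPieces :: (Integer, Int)
  nanPieces = decodeFloat (0 / 0 :: Double)

  nanCeiling :: Integer
  nanCeiling = ceiling (0 / 0 :: Double)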

There has been some discussion six years ago and nearly two years ago: 
http://blog.gmane.org/gmane.comp.lang.haskell.glasgow.user/month=20040801
 
Is there a chance to
- properly define the behaviour of functions depending on the function 
properFraction for values like NaN, Infinity, ...?


This is a question for haskell-prime, to be answered by people who know 
more about floating point than I do...


- get an implementation of this in GHC which computes the same results 
for all platforms?


I would certainly hope so, if we can find the source of the discrepancy 
and devise a fix.


Perhaps the function properFraction could raise an exception in case of 
isNaN and isInfinity?


Sounds plausible.  Does anyone have any objections?

Cheers,
Simon

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell] ANN: MissingH 0.14.2

2006-04-13 Thread John Goerzen
Hello,

I'm pleased to announce the release of version 0.14.2 of MissingH.

New features since 0.14.0 include:

 * New module MissingH.Path.Glob.  This module expands wildcards by
   examining the filesystem.  For instance, given the pattern
   "/*bin/*sh", you might get back ["/bin/bash", "/bin/sh", "/sbin/sash"]

 * New module MissingH.Path.WildMatch.  This module evaluates
   whether a given string matches a POSIX-style wildcard.  It can also
   convert such a wildcard into a regular expression for use with
   Text.Regex.

 * New function MissingH.List.hasAny :: Eq a => [a] -> [a] -> Bool
   It returns true if the given list contains any of the elements in
   the search list (see the short usage sketch after this list)

 * New function MissingH.IO.HVFS.vDoesExist, which returns true if the
   named object exists on the filesystem, regardless of what type of
   object it is

 * New function MissingH.Str.escapeRe, which takes a string and makes
   a regular expression that matches the string literally.  It takes
   care to properly escape all characters that could have special
   meaning in a regular expression.

 * ConfigParser now gives more helpful error messages when possible,
   including section and option in most error messages

 * Total number of unit tests for MissingH now stands at 716.
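
As promised above, a small usage sketch for hasAny, based only on the type
and description given in this announcement (so the argument order shown is
an assumption; both examples give the same answer either way):

  import MissingH.List (hasAny)

  main :: IO ()
  main = do
    print (hasAny "aeiou" "strength")   -- True: the lists share 'e'
    print (hasAny [1, 2, 3] [10, 20])   -- False: no element in common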

As always, MissingH can be downloaded from
http://quux.org/devel/missingh or, after a few days, from
http://http.us.debian.org/debian/pool/main/m/missingh

MissingH.Path.Glob and WildMatch are high-level ports of similar
modules in Python.

Thanks,

-- 
John Goerzen
Author, Foundations of Python Network Programming
http://www.amazon.com/exec/obidos/tg/detail/-/1590593715
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell] lhs2TeX-friendly emacs mode?

2006-04-13 Thread Conal Elliott
Is there a Haskell emacs mode that works well with lhs2TeX? Specifically (a) treating \begin{spec} ... \end{spec} like \begin{code}... \end{code}, and (b) coloring inline code (|expr|) and maybe inline verbatim (@expr@) as Haskell rather than LaTeX code.
 - Conal
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell] ANNOUNCE: HAppS 0.8

2006-04-13 Thread Einar Karttunen
Hello,

HAppS - Haskell Application Server version 0.8 has been released and
contains a complete rewrite of the ACID and HTTP functionalities.

Features include:

* MACID - Monadic framework for ACID transactions:
  Write apps as a set of simple state transformers. MACID write-ahead
  logging and checkpointing make it easy for you to guarantee
  application integrity in the face of unplanned outages. MACID even
  guarantees that your side effects will be executed at-least-once if
  they can complete within a time limit you define.
* HTTP Server:
  Performs better than Apache/PHP in our informal benchmarks (thanks to
  Data.FastPackedString), handles serving both large (video) files and
  lazy (javascript) streaming, supports HTTP-Auth, and more.
* SMTP Server
  Handle incoming email in your application without worrying about
  .procmail or other user level inbound mail configuration hackery. Just
  have the HAppS.SMTP listen on port 25 or have the system mail server
  SMTP forward mail for your app to some internal port.
* Mail delivery agent
  Stop worrying about making sure a separate local mail server or DNS is
  up and running to deliver your mail. HAppS takes care of making sure
  your mail is delivered as long as your application itself is running
  and makes sure no outbound mail is lost even with unplanned restarts.
* DNS resolver in pure Haskell
  For resolving MX records and concurrent queries. Can use an upstream
  DNS server or root servers directly.
* XML and XSLT
  Separate application logic from presentation using XML/XSLT. With
  HAppS, you can have your application output XML (via HTTP or SMTP) and
  handle style/presentation via separate XSLT files at runtime. HAppS
  takes care of doing server side XSLT for outbound mail and HTTP
  user-agents that don't support it client side.
* Sessions and much more!

Where to get?

http://happs.org/
darcs get http://happs.org/HAppS

--
Einar Karttunen
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


RE: FDs and confluence

2006-04-13 Thread Simon Peyton-Jones
| there are interesting problems in FDs, but it seems that the confluence
| problems were merely problems of the old translation, not anything
| inherent in FDs! I really had hoped we had put that phantom to rest.

Claus

You're doing a lot of work here, which is great.  Why not write a paper?
Even for people (like me) who are relatively familiar with FDs, it's
hard to follow a long email thread.  For others, who might well be
interested, it's even harder.  The phantom is not resting yet!   (On the
other hand, email can be a good way of developing the ideas, which is
what you have been doing.)

A good way forward might be to write a paper building on our recent JFP
submission, and proposing whatever changes and improvements you have
developed.  That would make your work accessible to a much wider
audience.  

Simon
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


RE: preemptive vs cooperative: attempt at formalization

2006-04-13 Thread Simon Marlow
On 12 April 2006 17:51, Malcolm Wallace wrote:

 Simon Marlow [EMAIL PROTECTED] wrote:
 
 By infinite loop, you mean both non-terminating, and non-productive.
 A non-terminating but productive pure computation (e.g. ones =
 1:ones) is not necessarily a problem.
 
 That's slightly odd terminology.  ones = 1:ones  is definitely
 terminating.  (length ones) is not, though.
 
 Well, the expression ones on its own is non-terminating.

under what definition of non-termination?  Non-termination has meant the
same as _|_ in a call-by-name semantics as far as I'm concerned, and
ones is most definitely not == _|_.

 So if you
 say putStrLn (show ones), it doesn't just sit there doing nothing.
 This infinite computation produces an infinite output.  So the fact
 that it is non-terminating is irrelevant.  I may very well want a
 thread to do exactly that, and even with a cooperative scheduler this
 is perfectly OK.  Other threads will still run just fine.

Depends entirely on whether putStrLn yields at regular intervals while
it is evaluating its argument.  If we are to allow cooperative
scheduling, then the spec needs to say whether it does or not (and
similarly for any IO operation you want to implicitly yield).

 The only time when other threads will *not* run under cooperative
 scheduling is when the non-terminating pure computation is *also*
 unproductive (like your length ones).

You seem to be assuming more about cooperative scheduling than eg. Hugs
provides.  I can easily write a thread that starves the rest of the
system without using any _|_s.  eg.

  let loop = do x <- readIORef r; writeIORef r (x+1); loop in loop

I must be missing something.  The progress guarantee we have on the wiki
makes complete sense, but the fairness guarantee that John proposed
seems much stronger.

I had in mind defining something based on an operational semantics such
as in [1].  The cooperative guarantee would be something like if any
transition can be made, then the system will choose one to make, with
an extra condition about pure terms that evaluate to _|_, and a
guarantee that certain operations like yield choose the next transition
from another thread.  Preemtpive would remove the _|_ condition, the
yield condition, and add a fairness property.

Cheers,
Simon

[1] Asynchronous Exceptions in Haskell,
http://www.haskell.org/~simonmar/papers/async.pdf
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: FFI, safe vs unsafe

2006-04-13 Thread Marcin 'Qrczak' Kowalczyk
John Meacham [EMAIL PROTECTED] writes:

 Checking thread local state for _every_ foregin call is definitly
 not an option either. (but for specificially annotated ones it is
 fine.)

BTW, does Haskell support foreign code calling Haskell in a thread
which the Haskell runtime has not seen before? Does it work in GHC?

If so, does it show the same ThreadId from that point until OS
thread's death (like in Kogut), or a new ThreadId for each callback
(like in Python)?

-- 
   __( Marcin Kowalczyk
   \__/   [EMAIL PROTECTED]
^^ http://qrnik.knm.org.pl/~qrczak/
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Concurrency, FFI status

2006-04-13 Thread Simon Marlow
This is just a heads up that I'm currently collating the current state
of the discussion re: concurrency and the FFI, with a view to
enumerating all the current issues with rationale on the wiki.  It's
getting to a state where I can't keep it all in my head at one time, and
I think this will help us to move forward, and give others a chance to
identify issues they would like to comment on.

So just in case anyone else was considering large changes to the
concurrency page on the wiki, please hold for a while.  I should have it
up by the end of the day.

Cheers,
Simon
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


RE: preemptive vs cooperative: attempt at formalization

2006-04-13 Thread Simon Marlow
On 13 April 2006 10:53, John Meacham wrote:

 On Thu, Apr 13, 2006 at 09:46:03AM +0100, Simon Marlow wrote:
 You seem to be assuming more about cooperative scheduling than eg.
 Hugs provides.  I can easily write a thread that starves the rest of
 the system without using any _|_s.  eg.
 
   let loop = do x <- readIORef r; writeIORef r (x+1); loop in loop
 
 this is a non-productive non-cooperative loop, as in _|_.

Ok, I'm confused because I'm thinking in terms of operational semantics
for IO.

Maybe a way to describe this is to give a meaning to a value of type IO
as a lazy sequence of yields and effects, with some way of evaluating
an IO action in the context of the world state, to get the next yield or
effect together with a continuation and the new world state.  Running an
IO action may give _|_ instead of the next yield or effect; ok.
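
A tiny datatype sketch of that reading (my own illustration, not the
semantics of [1]): an IO value denotes a lazy stream of yields and
world-transforming effects ending in a result, and running it may of
course still be _|_ at any step:

  -- One step of an IO computation: finished, an explicit yield point,
  -- or an effect that transforms an abstract world state and continues.
  data Step w a
    = Done a
    | Yield (Step w a)
    | Effect (w -> (w, Step w a))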

Still, I think the operational semantics interpretation works fine too.

Cheers,
Simon
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: deeqSeq proposal

2006-04-13 Thread Lennart Augustsson

Jan-Willem Maessen wrote:


On Apr 11, 2006, at 5:37 PM, Lennart Augustsson wrote:


Yes, I realize than dynamic idempotence is not the same as
cycle detection.  I still worry. :)

I think expectance is in the eye of the beholder.  The reason
that (the pure subset of) pH was a proper implementation of
Haskell was because we were not over-specifying the semantics
originally.  I would hate to do that now.


Though, to be fair, an awful lot of Prelude code didn't work in pH 
unless it was re-written to vary slightly from the specification.  So 
the assumption of laziness was more deeply embedded than the spec was 
willing to acknowledge.


-Jan-Willem Maessen


Well, if the pH scheduler had been fair I think the Prelude functions
would have been semantically correct (but maybe not efficient).

-- Lennart

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: preemptive vs cooperative: attempt at formalization

2006-04-13 Thread David Roundy
On Wed, Apr 12, 2006 at 05:50:40PM +0100, Malcolm Wallace wrote:
 The argument John was making is that this is a useful distinguishing
 point to tell whether your concurrent implementation is cooperative or
 preemptive.  My argument is that, even if you can distinguish them in
 this way, it is not a useful distinction to make.  Your program is
 simply wrong.  If you have a sequential program whose value is _|_, your
 program is bad.  If you execute it in parallel with other programs, that
 does not make it any less bad.  One scheduler reveals the wrongness by
 hanging, another hides the wrongness by letting other things happen.  So
 what?  It would be perverse to say that the preemptive scheduler is
 semantically better in this situation.

I understood John's criterion in terms of a limiting case that can be
exactly specified regarding latency.  As I see it, the point of preemptive
systems is to have a lower latency than cooperative systems, and this is
also what distinguishes the two.  But the trouble is that preemptive
systems can't have a fixed latency guarantee, and shouldn't be expected to.
So he's pointing out that at a minimum, a preemptive system should always
have a latency less than infinity, while a cooperative system always *can*
have an infinite latency.  While you're right that the limiting case is bad
code, the point isn't to handle that case well, the point is to emphasize
the close-to-limiting case, when a pure function might run for longer than
your desired latency.  His spec does this in a rigorous, but achievable
manner (i.e. a useful spec).
-- 
David Roundy
http://www.darcs.net
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


RE: FFI, safe vs unsafe

2006-04-13 Thread Simon Marlow
On 13 April 2006 10:02, Marcin 'Qrczak' Kowalczyk wrote:

 John Meacham [EMAIL PROTECTED] writes:
 
 Checking thread local state for _every_ foregin call is definitly
 not an option either. (but for specificially annotated ones it is
 fine.)
 
 BTW, does Haskell support foreign code calling Haskell in a thread
 which the Haskell runtime has not seen before? Does it work in GHC?

Yes, yes.

 If so, does it show the same ThreadId from that point until OS
 thread's death (like in Kogut), or a new ThreadId for each callback
 (like in Python)?

A new ThreadId, but that's not a conscious design decision, just a
symptom of the fact that we don't re-use old threads.
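
A minimal sketch of the situation under discussion (hypothetical module;
the C side that performs the call-ins is not shown): a Haskell action
exported to foreign code, which can then be invoked from an OS thread the
RTS has never seen, and which observes its ThreadId on each call-in:

  {-# LANGUAGE ForeignFunctionInterface #-}
  -- As noted above, in GHC each callback runs in a fresh Haskell thread,
  -- so successive call-ins print different ThreadIds.
  module CallbackSketch where

  import Control.Concurrent (myThreadId)

  reportThread :: IO ()
  reportThread = myThreadId >>= print

  foreign export ccall reportThread :: IO ()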

Cheers,
Simon
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Concurrency, FFI status

2006-04-13 Thread Simon Marlow
I have now summarised the concurrency proposal status, here:

 
http://hackage.haskell.org/cgi-bin/haskell-prime/trac.cgi/wiki/Concurrency

I have tried to summarise the various points that have arisen during the
discussion.  If anyone feels they have been mis-paraphrased, or I have
forgotten something, please feel free to edit, or send me some text for
inclusion.  I don't want to include long gobs of text in here, though:
just summarise the main points, and if necessary link to relevant
mailing list posts.

Cheers,
Simon
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: FDs and confluence

2006-04-13 Thread Iavor Diatchki
Hello,

On 4/12/06, Claus Reinke [EMAIL PROTECTED] wrote:
 that's why Ross chose a fresh variable in FD range position:
 in the old translation, the class-based FD improvement rule no
 longer applies after reduction because there's only one C constraint
 left, and the instance-based FD improvement rule will only instantiate
 the 'b' or 'c' in the constraint with a fresh 'b_123', 'b_124', ..,
 unrelated to 'b', 'c', or any previously generated variables in the
 constraint store.

I understand the reduction steps.  Are you saying that the problem is
that the two sets are not syntactically equal?   To me this does not
seem important: we just end up with two different ways to say the same
thing (i.e., they are logically equivalent).  I think there would
really be a problem if we could do some reduction and end up with two
non-equivalent constraint sets, then I think we would have lost
confluence.  But can this happen?

 another way to interpret your message: to show equivalence of
 the two constraint sets, you need to show that one implies the
 other, or that both are equivalent to a common constraint set -
I just used this notion of equivalance, becaue it is what we usually
use in logic.  Do you think we should use something else?

 you cannot use constraints from one set to help discharging
 constraints in the other.
I don't understand this, why not?  If I want to prove that 'p iff q' I
assume 'p' to prove 'q', and vice versa.  Clearly I can use 'p' while
proving 'q'.  We must be talking about different things :-)

-Iavor
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Haskell prime wiki

2006-04-13 Thread Iavor Diatchki
Hello,
The wiki page says that we should alert the committee about
inaccuracies etc of pages, so here are some comments about the page on
FDs
(http://hackage.haskell.org/trac/haskell-prime/wiki/FunctionalDependencies)

1) The example for non-termination can be simplified to:
f = \x y -> (x .*. [y]) `asTypeOf` y

2) The example for 'non-confluence' has a typo (bullet 2 should have a
'c' not a 'b'; as it is, the two are syntactically equal :-))

3) In the section on references it seems relevant to add a reference
to Simplifying and Improving Qualified Types by Mark Jones, because
it provides important background on the topic.

Hope this helps
-Iavor
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: FDs and confluence

2006-04-13 Thread Ross Paterson
On Thu, Apr 13, 2006 at 12:07:53PM -0700, Iavor Diatchki wrote:
 On 4/12/06, Claus Reinke [EMAIL PROTECTED] wrote:
  that's why Ross chose a fresh variable in FD range position:
  in the old translation, the class-based FD improvement rule no
  longer applies after reduction because there's only one C constraint
  left, and the instance-based FD improvement rule will only instantiate
  the 'b' or 'c' in the constraint with a fresh 'b_123', 'b_124', ..,
  unrelated to 'b', 'c', or any previously generated variables in the
  constraint store.
 
 I understand the reduction steps.  Are you saying that the problem is
 that the two sets are not syntactically equal?   To me this does not
 seem important: we just end up with two different ways to say the same
 thing (i.e., they are logically equivalent).

If c were mentioned in another constraint, they would not be equivalent.

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: deeqSeq proposal

2006-04-13 Thread Jan-Willem Maessen


On Apr 12, 2006, at 4:25 PM, John Meacham wrote:


On Wed, Apr 12, 2006 at 09:21:10AM -0400, Jan-Willem Maessen wrote:

Though, to be fair, an awful lot of Prelude code didn't work in pH
unless it was re-written to vary slightly from the specification.  So
the assumption of laziness was more deeply embedded than the spec was
willing to acknowledge.


out of curiosity what sort of things had to be rewritten? I have been
toying with the idea of relaxing sharing to help some optimizations and
was curious what I was in for.


Well, the differences really had to do with termination under an  
eager strategy.


But beyond obvious problems such as defining things in terms of take  
+ iterate (numericEnumFrom[Then]To is an obvious example), we ran  
into terrible performance problems with Read instances.  Programs  
would spend minutes running read, then a few fractions of a second  
computing.  We ended up doing a lot of tweaking, none of which was  
ever quite correct.  Ditching ReadS in terms of ReadP would do an  
enormous amount of good here, I think---it would at least put all the  
re-coding in one centralized place, which is what we ended up having  
to do anyhow.


Finally, there are a bunch of Haskell idioms which don't work in pH.   
The most obvious examples are numbering a list:

   zip [0..] xs
and where-binding a value which is unused in one clause:

f x
  | p x = ... r ...
  | q x = ... r ...
  | otherwise = ... no r ...
  where r = something very expensive

I suppose you could view this as a sharing problem: the expression  
r is shared down two of the branches and not down the other.  But I  
don't think that's what you meant.


A lot of these can be solved by a certain amount of code motion---but  
note that this code motion changes the termination properties of the  
program as it was written.  In pH that was naughty.


-Jan



John

--
John Meacham - ⑆repetae.net⑆john⑈
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: collecting requirements for FDs

2006-04-13 Thread Claus Reinke



What other libraries should Haskell' support, and what are their
requirements?


useful initiative! will your collection be available anywhere?

may I suggest that you (a) ask on the main Haskell and library lists
for better coverage (I would have thought that the alternative Num
prelude suggestions might have some use cases), and (b) collect
non-use cases as well (eg, where current implementations are
buggy/incomplete/do different things, or where other reasons have
prevented Haskellers from using FDs so far)? I think trying to clean
up the latter will be more effective than wading through dozens of
variations of the same working examples - you're looking for
counter-examples to the current design, aren't you?


and just in case you haven't got these on your list already, here are 
some examples from earlier discussions on this mailing list:


- ticket #92 has module Data.Records attached to it.
   http://hackage.haskell.org/trac/haskell-prime/ticket/92
   I'd like to be able to use that in Haskell'. the library is useful in 
   itself (I've used its record selection and concatenation parts when 
   encoding attribute grammars), and I also suggested it as a good 
   testcase for Haskell' providing a sufficient (but cleaned-up) subset 
   of currently available features. but it is also an example of code that


   - works with GHC, but not with Hugs; one of those problems 
   I reported on hugs-bugs:

   http://www.haskell.org//pipermail/hugs-bugs/2006-February/001560.html

   and went through a few of the Hugs/GHC differences here 
   on this mailing list:

   http://www.haskell.org//pipermail/haskell-prime/2006-February/000577.html
   
   and used the Select example to motivate the need for relaxed

   coverage in termination checking:
   http://www.haskell.org//pipermail/haskell-prime/2006-February/000825.html

   I have since come to doubt that GHC really solves the issue,
   it just happens that its strategy of delaying problems until they may
   no longer matter works for this example; but one can construct other 
   examples that expose the problem in spite of this delayed complaining 
   trick. see my own attempts to show FD problems here:

   http://www.haskell.org//pipermail/haskell-prime/2006-February/000781.html

   or Oleg's recent example on haskell-cafe:
   http://www.haskell.org//pipermail/haskell-cafe/2006-April/015372.html
   
   while I didn't quite agree with his interpretation (see my answer

   to his message), he did manage to construct an example in which
   GHC accepts a type/program in violation of an FD.

    - requires complex workarounds, thanks to current restrictions,
    where the same could be expressed simply and directly without;
    (compare the code for Remove in Data.Record.hs: the one in
    comments vs the one I had to use to make GHC happy)


- things like a simple type equality predicate at the type class level
   run into problems with both GHC and Hugs. reported to both
   GHC and Hugs bugs lists as:
   http://www.haskell.org//pipermail/hugs-bugs/2006-February/001564.html

- the FD-visibility limitations strike not only at the instance level. 
   here is a simplified example of a problem I ran into when trying 
   to encode ATS in FDs (a variable in a superclass constraint that

   doesn't occur in the class head, but is determined by an FD on
   the superclass constraint):
   http://hackage.haskell.org/trac/ghc/ticket/714

- the HList library and associated paper also use and investigate
   the peculiarities of FDs, and variations on the TypeEq theme
   (it has both unpractical/portable and practical versions that
   make essential use of some limitations in GHC's type class
   implementation to work around other of its limitations; it
   demonstrates wonderfully why the current story needs to
   be cleaned up!):
   http://homepages.cwi.nl/~ralf/HList/

hope that's the kind of thing you are looking for?-)

cheers,
claus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: FDs and confluence

2006-04-13 Thread Ross Paterson
On Thu, Apr 13, 2006 at 05:10:36PM -0700, Iavor Diatchki wrote:
   I understand the reduction steps.  Are you saying that the problem is
   that the two sets are not syntactically equal?   To me this does not
   seem important: we just end up with two different ways to say the same
   thing (i.e., they are logically equivalent).
 
  If c were mentioned in another constraint, they would not be equivalent.
 
 How so?  A concrete example would really be useful.  I think that the
 constraint 'C [a] b d' and 'C [a] c d' are equivalent and I don't see
 how the rest of the context can affect this (of course I have been
 wrong in the past :-).

They are equivalent, but "C [a] b d, Num c" and "C [a] c d, Num c" are not.

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: Defaults for superclass methods

2006-04-13 Thread Dave Menendez
John Meacham writes:

 On Tue, Apr 11, 2006 at 11:35:09AM +0100, Simon Marlow wrote:
  On 11 April 2006 11:08, Ross Paterson wrote:
  
   On Tue, Apr 11, 2006 at 11:03:22AM +0100, Simon Marlow wrote:
   This is a rather useful extension, and as far as I can tell it
   doesn't have a ticket yet: 
   
  
http://www.haskell.org//pipermail/libraries/2005-March/003494.html
   
   should I create a ticket?  Is there any reason it might be hard
   to implement?
   
   There are a range of proposals, but none of them are implemented.
   Wouldn't that rule them out for Haskell'?
  
  If it's not clear which is the right way to go, then yes I guess
  that does rule it out.  Could you summarise the proposals?  If
  there was a clear winner, and it was easy enough to implement,
  perhaps we can knock up a prototype in time.
 
 As I recall, this was brought up a few times during the class alias
 discussion and there were good technical reasons why it would be
 tricky to define a sane semantics for it. as in, it's harder than it
 first looks.

The tricky part is dealing with multiple subclasses.

For example,

class Functor f where
    fmap :: (a -> b) -> f a -> f b

class Functor f => Monad f where
    ...
    fmap = liftM

class Functor f => Comonad f where
    ...
    fmap = liftW

newtype Id a = Id a

instance Functor Id
instance Monad Id
instance Comonad Id

Which default gets used for fmap?
-- 
David Menendez [EMAIL PROTECTED] | In this house, we obey the laws
http://www.eyrie.org/~zednenem  |of thermodynamics!
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


[Haskell-cafe] Re: coherence when overlapping?

2006-04-13 Thread william kim

Thank you oleg.

Sulzmann et al use guards in CHR to turn overlapping heads (instances) into 
non-overlapping. Their coherence theorem still assumes non-overlapping.


I agree that what you described is the desirable behaviour when overlapping,
that is, to defer the decision when multiple instances match. However, why
is this coined as coherence? What is the definition of coherence in this
case?


class C a where
  f :: a -> a
instance C Int where
  f x = x+1
instance C a where
  f x = x

g x = f x

In a program like this, how does a coherence theorem rule out the
incoherent behaviour of early committing the f to the second instance?


I am very confused. Please help.

--william


From: [EMAIL PROTECTED]
Reply-To: [EMAIL PROTECTED]
To: [EMAIL PROTECTED], haskell-cafe@haskell.org
Subject: Re: coherence when overlapping?
Date: 13 Apr 2006 03:46:38 -

 But I am still confused by the exact definition of coherence in the case 
of

 overlapping. Does the standard coherence theorem apply? If yes, how?
 If no, is there a theorem?

Yes, there is, by Martin Sulzmann et al, the Theory of overloading (the
journal version)
http://www.comp.nus.edu.sg/~sulzmann/ms_bib.html#overloading-journal

A simple intuition is this: instance selection may produce more than
one candidate instance. Having inferred a polymorphic type with
constraints, the compiler checks to see if some of the constraints can
be reduced. If an instance selection produces no candidate instances,
the typechecking failure is reported. If there is exactly one
candidate instance, it is selected and the constraint is removed
because it is resolved.  An instance selection may produce more than
one candidate instance. Those candidate instances may be incomparable:
for example, given the constraint C a and instances C Int and C
Bool, both instances are candidates. If such is the case, the
resolution of that constraint is deferred and it `floats out', to be
incorporated into the type of the parent expression, etc. In the
presence of overlapping instances, the multiple candidate instances
may be comparable, e.g. C a and C Int.  The compiler then checks
to see if the target type is at least as specific as the most specific
of those candidate instances. If so, the constraint is reduced;
otherwise, it is deferred.  Eventually enough type information is
available to reduce all constraints (or else it is a type error).




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: coherence when overlapping?

2006-04-13 Thread Martin Sulzmann

Coherence (roughly) means that the program's semantics is independent
of the program's typing.

In case of your example below, I could type the program either using
the first or the second instance (assuming g has type Int -> Int).
That's clearly incoherent.

Guard constraints enforce that instances are non-overlapping.

instance C Int
instance C a | a =!= Int

The second instance can only fire if a is different from Int.

Non-overlapping instances are necessary but not sufficient to
obtain coherence. We also need that types/programs are unambiguous.

Martin


william kim writes:
  Thank you oleg.
  
  Sulzmann et al use guards in CHR to turn overlapping heads (instances) into 
  non-overlapping. Their coherence theorem still assumes non-overlapping.
  
  I agree that what you described is the desirable behaviour when overlapping, 
  that is to defer the decision when multiple instances match. However, why 
  this is coined as coherence? What is the definition of coherence in this 
  case?
  
   class C a where
     f :: a -> a
   instance C Int where
     f x = x+1
   instance C a where
     f x = x
  
  g x = f x
  
  In a program like this, how does a coherence theorem rules out the 
  incoherent behaviour of early committing the f to the second instance?
  
  I am very confused. Please help.
  
  --william
  
  From: [EMAIL PROTECTED]
  Reply-To: [EMAIL PROTECTED]
  To: [EMAIL PROTECTED], haskell-cafe@haskell.org
  Subject: Re: coherence when overlapping?
  Date: 13 Apr 2006 03:46:38 -
  
But I am still confused by the exact definition of coherence in the case 
  of
overlapping. Does the standard coherence theorem apply? If yes, how?
If no, is there a theorem?
  
  Yes, the is, by Martin Sulzmann et al, the Theory of overloading (the
  journal version)
  http://www.comp.nus.edu.sg/~sulzmann/ms_bib.html#overloading-journal
  
  A simple intuition is this: instance selection may produce more than
  one candidate instance. Having inferred a polymorphic type with
  constraints, the compiler checks to see of some of the constraints can
  be reduced. If an instance selection produces no candidate instances,
  the typechecking failure is reported. If there is exactly one
  candidate instance, it is selected and the constraint is removed
  because it is resolved.  An instance selection may produce more then
  one candidate instance. Those candidate instances may be incomparable:
  for example, given the constraint C a and instances C Int and C
  Bool, both instances are candidates. If such is the case, the
  resolution of that constraint is deferred and it `floats out', to be
  incorporated into the type of the parent expression, etc. In the
  presence of overlapping instances, the multiple candidate instances
  may be comparable, e.g. C a and C Int.  The compiler then checks
  to see if the target type is at least as specific as the most specific
  of those candidate instances. If so, the constraint is reduced;
  otherwise, it is deferred.  Eventually enough type information is
  available to reduce all constraints (or else it is a type error).
  
  
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: coherence when overlapping?

2006-04-13 Thread Martin Sulzmann

I believe that GHC's overlapping instance extensions
effectively uses inequalities.

Why do you think that 'inequalities' model 'best-fit'?

instance C Int  -- (1)
instance C a    -- (2)

under a 'best-fit' instance reduction strategy
we would resolve C a by using (2).

'best-fit' should be very easy to implement.
Simply order instances (resulting CHRs) in an appropriate
'best-fit' order.

In case of

instance C Int   
instance a =!= Int | C a    (2')

we can't reduce C a (because we can't satisfy a=!=Int)

Notice that (2') translates to

rule C a | a =!= Int <==> True

I think it's better to write a =!=Int not as part of the instance
context but write it as a guard constraint.

I don't think there's any issue for an implementation (either using
'best-fit' or explicit inequalities). The hard part is to establish
inference properties such as completeness etc.

Martin


Tom Schrijvers writes:
  On Thu, 13 Apr 2006, Martin Sulzmann wrote:
  
  
   Coherence (roughly) means that the program's semantics is independent
   of the program's typing.
  
    In case of your example below, I could type the program using
    either the first or the second instance (assuming
    g has type Int->Int). That's clearly incoherent.
  
   Guard constraints enforce that instances are non-overlapping.
  
   instance C Int
   instance C a | a =!= Int
  
   The second instance can only fire if a is different from Int.
  
   Non-overlapping instances are necessary but not sufficient to
   obtain coherence. We also need that types/programs are unambiguous.
  
  Claus Reinke was discussing this with me some time ago. He called it the 
  best fit principle, which would, in a way, as you illustrate above, allow
  inequality constraints on the instance head. You could even write it like:
  
   instance (a /= Int) => C a
  
  as you would do with the superclass constraints... I wonder whether 
  explicit inequality constraints would be useful on their own in all the 
  places where type class and equality constraints are used (class and 
  instance declarations, GADTs, ...). Or maybe it opens a whole new can of 
  worms :)
  
  Tom
  
  --
  Tom Schrijvers
  
  Department of Computer Science
  K.U. Leuven
  Celestijnenlaan 200A
  B-3001 Heverlee
  Belgium
  
  tel: +32 16 327544
  e-mail: [EMAIL PROTECTED]
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: [Haskell-cafe] Re: coherence when overlapping?

2006-04-13 Thread Simon Peyton-Jones
| I believe that GHC's overlapping instance extensions
| effectively uses inequalities.

I tried to write down GHC's rules in the manual:
http://haskell.org/ghc/dist/current/docs/users_guide/type-extensions.html#instance-decls

The short summary is:
- find candidate instances that match
- if there is exactly one, choose it
- if there is more than one, choose the best fit UNLESS that choice
would be changed if a type variable were instantiated (see the sketch below)
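
A small illustration of that rule (a hedged sketch, not taken from the
thread; it loosely follows the overlap example in that manual section and
uses GHC 6.4's flag spellings):

{-# OPTIONS -fglasgow-exts #-}
{-# OPTIONS -fallow-overlapping-instances #-}
module Overlap where

class C a b where op :: a -> b -> b

instance C Int [a]   where op _ xs = xs
instance C Int [Int] where op n xs = map (+ n) xs

g :: [Int] -> [Int]
g xs = op (1 :: Int) xs    -- both heads match; C Int [Int] is the best fit

-- h :: [b] -> [b]
-- h xs = op (1 :: Int) xs -- the constraint is C Int [b]: only the general
--                         -- head matches now, but the choice would change if
--                         -- b were instantiated to Int, so GHC refuses to
--                         -- commit (unless incoherent instances are allowed).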


Simon

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: [Haskell-cafe] Re: coherence when overlapping?

2006-04-13 Thread william kim

Thank you Martin.


Coherence (roughly) means that the program's semantics is independent
of the program's typing.

In case of your example below, I could type the program using
either the first or the second instance (assuming
g has type Int->Int). That's clearly incoherent.


If g has type Int->Int, it is not hard to say that the first instance should 
apply.
But what if g has a polymorphic type? In this case it seems to me that 
choosing the second instance is acceptable, as that is the only 
applicable one at the moment. What is the definition of a coherent 
behaviour here? Or is there one?




Non-overlapping instances are necessary but not sufficient to
obtain coherence. We also need that types/programs are unambiguous.


Do you therefore imply that coherence is not defined without the 
non-overlapping assumption?


--william



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: coherence when overlapping?

2006-04-13 Thread oleg

It seems that the subject is a bit more complex, and one can force GHC
to choose the less specific instance (if one confuses GHC well
enough): see the example below.

First of all, the inequality constraint is already achievable in
Haskell now: TypeEq t1 t2 False is such a constraint. One can
write polymorphic functions that distinguish whether the argument is a list
(any list of something) or not, etc. One can write stronger invariants
like records where labels are guaranteed to be unique.

There are two problems: first there are several notions of type
inequality, all of which are useful in different
circumstances. 
http://www.haskell.org/pipermail/haskell-prime/2006-March/000936.html

Second, how inequality interacts with functional dependencies -- and
in general, with the type improvement. And here, many interesting
things emerge. For example, the following code


 {-# OPTIONS -fglasgow-exts #-}
 {-# OPTIONS -fallow-undecidable-instances #-}
 {-# OPTIONS -fallow-overlapping-instances #-}

 module Foo1 where

 class C a b | a -> b where
     f :: a -> b

 instance C Int Bool where
     f x = True

 instance TypeCast Int b => C a b where
     f x = typeCast (100::Int)

 class TypeCast   a b   | a -> b, b -> a   where typeCast   :: a -> b
 class TypeCast'  t a b | t a -> b, t b -> a where typeCast'  :: t -> a -> b
 class TypeCast'' t a b | t a -> b, t b -> a where typeCast'' :: t -> a -> b
 instance TypeCast'  () a b => TypeCast a b where typeCast x = typeCast' () x
 instance TypeCast'' t a b => TypeCast' t a b where typeCast' = typeCast''
 instance TypeCast'' () a a where typeCast'' _ x  = x

 class D a b | a -> b where
     g :: a -> b

 instance D Bool Bool where
     g x = not x

 instance TypeCast a b => D a b where
     g x = typeCast x

 test1 = f (42::Int) -- == True
 test2 = f 'a'   -- == 100

 test3 = g (1::Int)  -- == 1
 test4 = g True  -- == False

 bar x = g (f x) `asTypeOf` x

We see that test1 through test4 behave as expected. We can even define
the function 'bar'. Its inferred type is

*Foo1 :t bar
bar :: (C a b, D b a) => a -> a

The question becomes: is this a function? Can it be applied to
anything at all? If we apply it to Int (thus instantiating the type a to
Int), the type b is instantiated to Bool, and so (following the
functional dependency for class D), the type a should be Bool (and it
is already an Int). OTOH, if we apply bar to anything but Int, then
the type b should be Int, and so should the type a. Liar's paradox.
And indeed, bar cannot be applied to anything because the constraints
are contradictory.

What is more interesting is the slight variation of that example:

 class C a b | a -> b where
     f :: a -> b

 instance C Int Int where
     f x = 10+x

 instance TypeCast a b => C a b where
     f x = typeCast x

 class D a b | a -> b where
     g :: a -> b

 instance D Int Bool where
     g x = True

 instance TypeCast Int b => D a b where
     g x = typeCast (10::Int)

 test1 = f (42::Int)
 test2 = f 'a'

 test3 = g (1::Int)
 test4 = g True

 bar x = g (f x) `asTypeOf` x

 test5 = bar (1::Int)

*Foo :t bar
bar :: (C a b, D b a) => a -> a

If bar is applied to an Int, then the type b should be an Int, so the
first instance of D ought to have been chosen, which gives the
contradiction a = Bool. And yet it works (GHC 6.4.1). Test5 is
accepted and even works 
*Foo test5
10


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: coherence when overlapping?

2006-04-13 Thread Claus Reinke
one can force GHC to choose the less specific instance (if one 
confuses GHC well enough): see the example below.


your second example doesn't really do that, though it may look that way.


class D a b | a -> b where  g :: a -> b
instance D Int Bool where  g x = True
instance TypeCast Int b => D a b where g x = typeCast (10::Int)

..

bar x = g (f x) `asTypeOf` x
test5 = bar (1::Int)

*Foo :t bar
bar :: (C a b, D b a) => a -> a

If bar is applied to an Int, then the type b should be an Int, so the
first instance of D ought to have been chosen, which gives the
contradiction a = Bool. And yet it works (GHC 6.4.1). Test5 is
accepted and even works 
*Foo test5

10


your argument seems to imply that you see FD range parameters as
outputs of the instance inference process (*), so that the first Int 
parameter in the constraint D Int Int is sufficient to select the first 
instance (by ignoring the Int in the second parameter and using 
best-fit overlap resolution), leading to the contradiction Int=Bool.


alas, current FD implementations don't work that way..

Hugs will complain about the overlap being inconsistent with the FD,
for both C and D - does it just look at the instance heads?

GHC will accept D even without overlapping instances enabled, 
but will complain about C, so it seems that it takes the type equality
implied by FDs in instance contexts into account, seeing instances
D Int Bool and D a Int - no overlaps. Similarly, when it sees a
constraint D Int Int, only the second instance head will match..

if you comment out the second C instance, and disable overlaps,
the result of test5 will be the same.


First of all, the inequality constraint is already achievable in
Haskell now: TypeEq t1 t2 False is such a constraint. 


as you noted, that is only used as a constraint, for checks after
instantiation, which is of little help as current Haskell has corners that 
ignore constraints (such as instance selection). specifically, there is a 
difference between the handling of type equality and type inequality: 
the former can be implied by FDs, which are used in instance 
selection, the latter can't and isn't (which is why I'd like to have
inequality constraints that are treated the same way as FD-based
equality constraints, even where constraints are otherwise ignored).

if we want to formalise the interaction of FDs and overlap resolution,
and we want to formalise the latter via inequality guards, then it 
seems that we need to put inequality constraints (negative type 
variable substitutions) on par with equality constraints (positive 
type variable substitutions).


cheers,
claus

(*) or does it seem to me to be that way because that is how I'd
   like FD range parameters to be treated?-)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] The Marriage of Heaven and Hell: Type Classes and Bit-Twiddling

2006-04-13 Thread David F. Place
Sorry to respond to my own message, but I found a much more  
satisfactory way to solve this problem.  ghc is able to specialize it  
so that



data Test1 = Foo | Bar | Baaz | Quux deriving (Enum, Bounded)

sizeTest1 :: (Set Test1) -> Int
sizeTest1 = sizeB


compiles into a call directly to size12.   I don't think I could do  
this in any other language (without classes and HM types.)  Hooray  
for Haskell!
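
A hedged aside: besides the INLINE pragma shown below, one could also ask for
the specialisation explicitly in the module that defines sizeB, along the
lines of

{-# SPECIALIZE sizeB :: Set Test1 -> Int #-}

(assuming the Test1, Set and sizeB definitions shown in this thread are in
scope there).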




setBound :: Bounded a => Set a -> a
setBound s = maxBound

-- | /O(1)/. The number of elements in the set.
sizeB :: (Bounded a,Enum a) => Set a -> Int
{-# INLINE sizeB #-}
sizeB s@(Set w) =
    case fromEnum $ setBound $ (Set 0) `asTypeOf` s of
      x | x <= 12 -> fromIntegral $ size12 $ fromIntegral w
      x | x <= 24 -> fromIntegral $ size24 $ fromIntegral w
      x | x <= 32 -> fromIntegral $ size32 $ fromIntegral w
      _ -> fromIntegral $ size64 $ fromIntegral w

size12 :: Word64 -> Word64
size12 v = (v * 0x1001001001001 .&. 0x84210842108421) `rem` 0x1f

size24' :: Word64 -> Word64
size24' v = ((v .&. 0xfff) * 0x1001001001001 .&. 0x84210842108421) `rem` 0x1f

size24 :: Word64 -> Word64
size24 v = (size24' v) + ((((v .&. 0xfff000) `shiftR` 12) * 0x1001001001001 .&. 0x84210842108421) `rem` 0x1f)

size32 :: Word64 -> Word64
size32 v = (size24 v) + (((v `shiftR` 24) * 0x1001001001001 .&. 0x84210842108421) `rem` 0x1f)

size64 :: Word64 -> Word64
size64 v = hi + lo
    where lo = size32 $ v .&. 0xffffffff
          hi = size32 $ v `shiftR` 32
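
A quick sanity check for the arithmetic above (a hedged addition: naiveCount
and checkSize12 are made-up names, the check only exercises size12 on the
12-bit inputs the trick is designed for, and it assumes the module already
imports Data.Bits and Data.Word as the code above requires):

-- count bits one at a time, as a reference implementation
naiveCount :: Word64 -> Word64
naiveCount 0 = 0
naiveCount v = (v .&. 1) + naiveCount (v `shiftR` 1)

-- size12 should agree with the naive count on every 12-bit value
checkSize12 :: Bool
checkSize12 = and [ size12 v == naiveCount v | v <- [0 .. 0xfff] ]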




David F. Place
mailto:[EMAIL PROTECTED]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re[2]: [Haskell-cafe] Counting bits: Sanity Check

2006-04-13 Thread Bulat Ziganshin
Hello David,

Thursday, April 13, 2006, 12:55:05 AM, you wrote:

 Yes, especially curious since the algorithm is taken from AMD's
 optimization guide for the Athlon and Opteron series.  I'm not good  
 enough at reading core syntax to be able to see what GHC is doing  
 with it.

optimization in GHC is far away from low-level asm optimization, so it
is no surprise that this doesn't work



-- 
Best regards,
 Bulat    mailto:[EMAIL PROTECTED]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Fundeps: I feel dumb

2006-04-13 Thread Creighton Hogg
On 13 Apr 2006 03:27:03 -, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

 Creighton Hogg wrote:

  No instance for (MatrixProduct a (Vec b) c)
arising from use of `*' at interactive:1:3-5
  Probable fix: add an instance declaration for (MatrixProduct a
  (Vec b) c)
  In the definition of `it': it = 10 * (vector 10 ([0 .. 9]))

 Let us look at the instance

   class MatrixProduct a b c | a b -> c where
     (*) :: a -> b -> c
   instance (Num a) => MatrixProduct a (Vec a) (Vec a) where

 it defines what happens when multiplying a vector of some numeric type
 'a' by a value _of the same_ type. Let us now look at the error
 message:
(MatrixProduct a (Vec b) c)

 That is, when trying to compile your expression
 10 * (vector 10 ([0 .. 9]))
 the typechecker went looking for (MatrixProduct a (Vec b) c)
 where the value and the vector have different numeric types. There is
 no instance for such a general case, hence the error. It is important
 to remember that the typechecker first infers the most general type
 for an expression, and then tries to resolve the constraints. In your
 expression,
 10 * (vector 10 ([0 .. 9]))
 we see constants 10, 10, 0, 9. Each constant has the type Num a => a.
 Within the expression 0 .. 9, both 0 and 9 must be of the same type
 (because [n .. m] is an abbreviation for enumFromTo n m, and according
 to the type of the latter
   enumFromTo :: a -> a -> [a]
 both arguments must be of the same type).

 But there is nothing that says that the first occurrence of 10 must be
 of the same numeric type as the occurrence of 9. So, the most general
 type assignment will be (Num a => a) for 10, and (Num b => b) for 9.

Thank you very much for the explanation:  it makes a lot of sense.
So, if one does not want to force a lot of type declarations into the
code, which would be fairly awkward, is there a way to do this with
fundeps or other type extensions that will be a lot prettier, or is any
way of defining type classes going to run into the same problems?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: coherence when overlapping?

2006-04-13 Thread Aaron Denney
On 2006-04-13, Martin Sulzmann [EMAIL PROTECTED] wrote:

 I believe that GHC's overlapping instance extensions
 effectively uses inequalities.

 Why do you think that 'inequalities' model 'best-fit'?

 instance C Int  -- (1)
 instance C a    -- (2)

 under a 'best-fit' instance reduction strategy
 we would resolve C a by using (2).

 'best-fit' should be very easy to implement.
 Simply order instances (resulting CHRs) in an appropriate
 'best-fit' order.

 In case of

 instance C Int   
 instance a =!= Int | C a    (2')

 we can't reduce C a (because we can't satisfy a=!=Int)

 Notice that (2') translates to

 rule C a | a =!= Int <==> True

 I think it's better to write a =!=Int not as part of the instance
 context but write it as a guard constraint.

 I don't think there's any issue for an implementation (either using
 'best-fit' or explicit inequalities). The hard part is to establish
 inference properties such as completeness etc.


This best-fit is essentially what people doing multi-method dispatch
want.  It turns out to not be as trivial as one would hope.

-- 
Aaron Denney
--

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] #if and #endif

2006-04-13 Thread ihope
I grabbed the source code to Haddock, but GHC doesn't like the #if's
and the #endif's. What can I do with these?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] #if and #endif

2006-04-13 Thread ihope
On 4/13/06, Jason Dagit [EMAIL PROTECTED] wrote:
 Try passing -cpp to ghc when you compile.

 Jason

Thanks. Will do.
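
For reference, a minimal sketch of a file using such blocks (a hedged example,
not from the thread; __GLASGOW_HASKELL__ is the version macro GHC's
preprocessor defines, e.g. 604 for GHC 6.4):

{-# OPTIONS -cpp #-}   -- per-file alternative to passing -cpp on the command line
module Main where

#if __GLASGOW_HASKELL__ >= 604
main :: IO ()
main = putStrLn "built with GHC 6.4 or later"
#else
main :: IO ()
main = putStrLn "built with an older GHC"
#endif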
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Local Fundeps [was: Fundeps: I feel dumb]

2006-04-13 Thread oleg

Creighton Hogg posed the following problem. Given a rather
straightforward matrix multiplication code

 -- The elements and the size
 data Vec a = Vec (Array Int a) Int deriving (Show,Eq)
 type Matrix a = (Vec (Vec a))
 class MatrixProduct a b c | a b -> c where
     (*) :: a -> b -> c
 instance (Num a) => MatrixProduct a (Vec a) (Vec a) where
 (*) num (Vec a s) = Vec (fmap (*num) a) s
 vector n elms = Vec (array (0,n-1) $ zip [0..n-1] elms) n

we'd like to get the following straightforward test to compile:
 test = 10 * (vector 10 [0..9])

The trouble is it doesn't: the function (*) has the type a -> b -> c,
so the type of the scalar doesn't have to be in any a priori relation
with the type of the matrix (to permit efficient representations
for particular sorts of matrices). Such a general type for (*)
means that the test expression does not constrain 10 to be of the
same numeric type as the type of matrix elements. So, the inferred type
for test is
test :: (Num a, Num b, MatrixProduct a (Vec b) c) => c
The constraint must be resolved (top level, monomorphism restriction):
but it can't because there is no instance 
MatrixProduct a (Vec b) c
There is only a more specialized instance, which doesn't match.

The solution exists: change test so that the type of the scalar and
the base type of the vector are the same:

 test1 = (10::Int) * (vector 10 [0..9::Int])
 test2 = let n = 10 in n * (vector 10 [0..(9 `asTypeOf` n)])

or add the type annotations in other ways. But that is annoying.

Creighton Hogg asked if there is another way. There is: change the
instance to

 instance (Num a,Num b,TypeCast a b) => MatrixProduct a (Vec b) (Vec b) where
 (*) num (Vec a s) = Vec (fmap (* (typeCast num)) a) s

The difference is subtle but important: given such a general instance,
trying to resolve MatrixProduct a (Vec b) c now succeeds. Instance
selection is done only based on the types in the head; constraints are
not taken into account. When the match succeeds, GHC commits to it and
goes checking the constraints. One of the constraints, TypeCast a b,
says that the type a must be the same as b. Because 'a' and 'b' were
just type variables (could be instantiated), that constraint succeeds
and we accomplished our task.

Now the original test types and works.

In short: given the instance C a a and the constraint C a b, we
see failure: C a a can't match C a b. The type variables of the
latter aren't instantiated: this is matching, not the full
unification. The re-written instance TypeCast a b = C a b,
effectively describes the same set of types. Now, however,
C a b matches that instance -- and, the TypeCast constraint forces
a and b to be the same. These _local_, per-instance functional
dependencies notably improve the power of instance selection: from
mere matching to some type improvement (towards the full
unification). The net result is fewer type annotations required from
the user. One might consider that useful.


The full code:

{-# OPTIONS -fglasgow-exts #-}
{-# OPTIONS -fallow-undecidable-instances #-}
module Foo where

import Array
data Vec a = Vec (Array Int a) Int deriving (Show,Eq)
type Matrix a = (Vec (Vec a))
class MatrixProduct a b c | a b -> c where
    (*) :: a -> b -> c

{- previously:
instance (Num a) => MatrixProduct a (Vec a) (Vec a) where
(*) num (Vec a s) = Vec (fmap (*num) a) s
-}
instance (Num a,Num b,TypeCast a b) => MatrixProduct a (Vec b) (Vec b) where
(*) num (Vec a s) = Vec (fmap (* (typeCast num)) a) s

vector n elms = Vec (array (0,n-1) $ zip [0..n-1] elms) n

test = 10 * (vector 10 [0..9])
test1 = (10::Int) * (vector 10 [0..9::Int])
test2 = let n = 10 in n * (vector 10 [0..(9 `asTypeOf` n)])


class TypeCast   a b   | a -> b, b -> a   where typeCast   :: a -> b
class TypeCast'  t a b | t a -> b, t b -> a where typeCast'  :: t -> a -> b
class TypeCast'' t a b | t a -> b, t b -> a where typeCast'' :: t -> a -> b
instance TypeCast'  () a b => TypeCast a b where typeCast x = typeCast' () x
instance TypeCast'' t a b => TypeCast' t a b where typeCast' = typeCast''
instance TypeCast'' () a a where typeCast'' _ x  = x
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] RuntimeLoader

2006-04-13 Thread Tim Newsham

Hi,
   I'm about to start playing with HWS-WP (web server + plugins).  It
relies on RuntimeLoader:

http://www.algorithm.com.au/wiki/hacking/haskell.ghc_runtime_loading

I grabbed the example and built it (only one minor tweak to imports
to get it to build) but it doesn't quite work:

$ ./src/TextFilter ./plugins/Lower.o  README
TextFilter: ./plugins/Lower.o: unknown symbol `__stginit_Char_'
TextFilter: user error (resolveFunctions failed?False)

There were also some warnings during building:

Compiling RuntimeLoader    ( ../runtime_loader/RuntimeLoader.hs, ./RuntimeLoader.o )
/tmp/ghc11951.hc: In function `s2Pj_ret':
/tmp/ghc11951.hc:170: warning: implicit declaration of function 
`unloadObj'

/tmp/ghc11951.hc: In function `s2JU_entry':
[...]

I assume this is because it is mucking with some ghc internals?
Is anyone familiar with this package?  Is there a more up-to-date
version or alternative?

Tim Newsham
http://www.lava.net/~newsham/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe