Re: Rules for class methods and Safe Haskell

2014-11-13 Thread Wolfgang Jeltsch

On Friday, 15.08.2014, at 23:10 +0300, Wolfgang Jeltsch wrote:
 Hi,
 
 the module Control.Arrow declares a set of rules for the Arrow class. It
 is marked “Trustworthy”, probably to allow these rules to actually fire.
 
 Now these rules are only correct for class instances that actually
 satisfy the arrow laws. If the author of another module defines an
 instance of Arrow that does not respect the laws, this other module
 could still be considered “Safe” by GHC, although the rules from
 Control.Arrow are bogus now.
 
 Is this considered a problem?
 
 All the best,
 Wolfgang

Hi,

could someone please answer this e-mail? This issue is important for me.

All the best,
Wolfgang




Re: Rules for class methods and Safe Haskell

2014-11-13 Thread David Feuer
That's an interesting question. I'm not even close to an expert, but I
*think* that parametricity prevents those particular rules from breaking
Safe Haskell guarantees. The laws may not *hold* for a broken instance, but
I don't *think* that lets you break type safety or IO encapsulation.
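To make the worry concrete, here is a minimal sketch of my own (not code from the thread; the Counted type and its instances are hypothetical): a module that GHC accepts under Safe Haskell, containing an Arrow whose composition violates the category identity law, so the Trustworthy rewrite rules in Control.Arrow are unsound for it.

    {-# LANGUAGE Safe #-}
    module LawlessArrow where

    import Prelude hiding (id, (.))
    import Control.Category
    import Control.Arrow

    -- An arrow that counts compositions; the extra (+1) in (.) breaks the
    -- identity law (id . f /= f), yet the module still compiles as Safe.
    newtype Counted a b = Counted (a -> (Int, b))

    instance Category Counted where
      id = Counted (\a -> (0, a))
      Counted g . Counted f = Counted (\a ->
        let (m, b) = f a
            (n, c) = g b
        in (m + n + 1, c))

    instance Arrow Counted where
      arr f = Counted (\a -> (0, f a))
      first (Counted f) = Counted (\(a, c) ->
        let (n, b) = f a in (n, (b, c)))

Whether the now-bogus rules can do worse than produce wrong (but well-typed) results is exactly the parametricity question raised above.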
On Nov 13, 2014 2:03 PM, Wolfgang Jeltsch g9ks1...@acme.softbase.org
wrote:


 On Friday, 15.08.2014, at 23:10 +0300, Wolfgang Jeltsch wrote:
  Hi,
 
  the module Control.Arrow declares a set of rules for the Arrow class. It
  is marked “Trustworthy”, probably to allow these rules to actually fire.
 
  Now these rules are only correct for class instances that actually
  satisfy the arrow laws. If the author of another module defines an
  instance of Arrow that does not respect the laws, this other module
  could still be considered “Safe” by GHC, although the rules from
  Control.Arrow are bogus now.
 
  Is this considered a problem?
 
  All the best,
  Wolfgang

 Hi,

 could someone please answer this e-mail? This issue is important for me.

 All the best,
 Wolfgang




RE: RULES map (\x -> x) = id

2013-01-18 Thread Simon Peyton-Jones
|  So I wonder: Why is rule "map id2" not enough here?

Compile with -ddump-rules.  What you see is below.

* The rule "map id" actually has an LHS like this:
  myMap a (id a)
  where the (id a) is a type application.  It'll ONLY match a term that looks like
  myMap ty (id ty)
  where ty is some type.  It won't match  (myMap (\x.x))

* The rule "map id2" has an LHS like
 myMap a (\x.x)
   If we try to match that against the term (myMap a (id a)), will it match?
   Sadly, GHC *does* expand 'id', so the term is equivalent to
  myMap a ((/\b.\x:b.x) a)
   To make this match (\x:a.x) you'd have to do beta-reduction on the fly during
   matching, at least for type applications.

Conclusion.  Matching functions is bound to be hard.  You could try improving 
the matcher, which is in Rules.lhs, function 'match'.  Doing some kind of on 
the fly beta reduction might be worth a try.  

I hope this is helpful. I would be happy if someone felt like augmenting the 
Commentary to cover some of this information.  (I could review.)

Simon

"map id" [ALWAYS]
forall (@ a{tv agb} [tv]).
  main:Rules.myMap{v r0} @ a{tv agb} [tv]
                         @ a{tv agb} [tv]
                         (base:GHC.Base.id{v r2s} [gid] @ a{tv agb} [tv])
  = base:GHC.Base.id{v r2s} [gid] @ [a{tv agb} [tv]]
"map id2" [ALWAYS]
forall (@ a{tv ag3} [tv]).
  main:Rules.myMap{v r0} @ a{tv ag3} [tv]
                         @ a{tv ag3} [tv]
                         (\ (x{v aeP} [lid] :: a{tv ag3} [tv]) -> x{v aeP} [lid])
  = base:GHC.Base.id{v r2s} [gid] @ [a{tv ag3} [tv]]

|  -Original Message-
|  From: glasgow-haskell-users-boun...@haskell.org 
[mailto:glasgow-haskell-users-
|  boun...@haskell.org] On Behalf Of Joachim Breitner
|  Sent: 17 January 2013 10:24
|  To: glasgow-haskell-users@haskell.org
|  Subject: RULES map (\x -> x) = id
|  
|  Hi,
|  
|  I am experimenting with rewrite rules, but found that they do not fire as
|  often as I wanted them. Here is a small example:
|  
|  module MapId where
|  
|  myMap f [] = []
|  myMap f (x:xs) = f x : myMap f xs
|  
|  {-# RULES "map id" myMap id = id #-}
|  {-# RULES "map id2" myMap (\x -> x) = id #-}
|  
|  mapId = myMap id
|  mapIdApp x = myMap id x
|  
|  mapLamId = myMap (\x -> x)
|  mapLamIdApp x = myMap (\x -> x) x
|  
|  This works as expected, i.e. the latter four functions become some kind of
|  identity. But I am a bit surprised that I do need two rules to catch all
|  cases, as I thought that the simplifier „looks through“ definitions like id.
|  But when I remove the "map id" rule, the functions mapId and mapIdApp are
|  not optimized.
|  
|  So I wonder: Why is rule "map id2" not enough here?
|  
|  Thanks,
|  Joachim
|  
|  
|  --
|  Joachim nomeata Breitner
|m...@joachim-breitner.de  |  nome...@debian.org  |  GPG: 0x4743206C
|xmpp: nome...@joachim-breitner.de | http://www.joachim-breitner.de/



Re: RULES for ByteString are not fired

2012-10-28 Thread Ian Lynagh

Hi Kazu,

On Tue, Aug 28, 2012 at 01:37:32PM +0900, Kazu Yamamoto wrote:
 
 It seems to us (my friends and me) that term rewriting rules for
 ByteString are not fired in recent GHCs.

Thanks for the report. I've filed a ticket here:
http://hackage.haskell.org/trac/ghc/ticket/7374


Thanks
Ian




Re: RULES for ByteString are not fired

2012-08-27 Thread Thomas DuBuisson
Another data point:

The bytestring 'break' rule fired fine for me (GHC 7.4.1 Linux x86-64).

On Mon, Aug 27, 2012 at 9:37 PM, Kazu Yamamoto k...@iij.ad.jp wrote:
 Hello,

 It seems to us (my friends and me) that term rewriting rules for
 ByteString are not fired in recent GHCs.

 6.12.3     OK
 7.0.4      NG
 7.4.1      NG
 7.6.1 RC1  NG

 For example, compiling the example from this ticket
 http://hackage.haskell.org/trac/ghc/ticket/3703
 results in the following:

 % ghc -O  -ddump-simpl-stats --make breakOn.hs
 14 RuleFired
   4 Class op showsPrec
   2 Class op show
   2 eqChar#-case
   2 unpack
   2 unpack-list
   1 Class op ==
   1 Class op 

 There are no ByteString rules.

 Is this a bug or intention?

 --Kazu

 {-# LANGUAGE OverloadedStrings #-}

 import qualified Data.ByteString.Char8 as B

 main :: IO ()
 main = do
     let string1 = B.pack "This is a string"
         string2 = B.pack "This is another string"
     print (breakOn ' ' string1)
     print (breakOn ' ' string2)

 breakOn :: Char -> B.ByteString -> (B.ByteString, B.ByteString)
 breakOn c = B.break (c==)





Re: rules

2007-03-30 Thread skaller
On Fri, 2007-03-30 at 13:04 -0700, Tim Chevalier wrote:
 On 3/30/07, skaller [EMAIL PROTECTED] wrote:
  I'm curious when and how GHC applies rewrite rules,
  and how expensive it is.
 
 
 Have you seen the Playing By Rules paper?
 http://research.microsoft.com/~simonpj/Papers/rules.htm
 
 If you still have questions after reading it, you can always ask here.

Thanks for that reference! 

FYI, the authors comment that the possibility exists for
feeding reduction rules into a theorem prover but it has
not been explored.

This IS indeed being done right now by Felix. Felix
supports axioms, lemmas, and reduction rules. Lemmas
are theorems which are simple enough to be proven with
an automatic theorem prover.

A couple of days ago my first test case lemma was
proven by Ergo and Simplify. (roughly I proved
that given x + y = y + x and x + 0 = x, that
0 + x = x .. not exactly rocket science :)

Felix emits all the
axioms and lemmas in Why notation, which allows
a large class of theorem provers to have a go at
proving them, since Why translates them.

Reductions aren't handled yet because doing this raised
a simple syntactic problem: I have no way to distinguish
whether a reduction is an axiom or a lemma.

I would note also that default definitions of typeclass
functions are also axioms.

What would be REALLY cool is to prove that typeclass
instances conform to the specified semantics. That,
however, is much harder, especially in Felix (which
is an imperative language ;(

See also:

http://why.lri.fr

IMHO: connecting a programming language to a theorem prover
is probably mandatory for optimisation.  Obvious goals
other than reduction rules are elimination of pre-condition
checks for partial functions. However doing this is hard
because theorem provers can be slow, so generally have
to be run offline. Thus some way of caching the results
for a subsequent recompilation is required.

-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net


Re: RULES and type classes

2007-03-29 Thread Donald Bruce Stewart
haskell:
 Is there any way to use RULES substitutions with type classes?
 
 I'm writing a reactive programming arrow (same idea as Yampa, different
 design goals), and it would help performance (and not just in the speed
 sense) to be able to tell when a value derived with arr hasn't changed.
 So I'd like to be able to specialize arr to functions whose result is an
 instance of Eq.
 
 I tried 
 {-# RULES "reactiveArr/Eq" reactiveArr = reactiveArrEq #-}
 but got the message 
 
 Control/Arrow/Reactive/Reactive.hs:89:41:
 No instance for (Eq b)
   arising from instantiating a type signature
   at Control/Arrow/Reactive/Reactive.hs:89:41-89
 Possible fix: add (Eq b) to the tcRule
 When checking the transformation rule "reactiveArr/Eq"
 
 I tried adding various sorts of type signatures, but I couldn't find any
 way around this... is it a restriction in the RULES rewrite engine?  Is
 there a workaround, or some mechanism other than RULES that I should be
 using?  I could write a special arrEq function, but I'd like to
 minimize the number of extraneous operations outside the Arrow class.
 
 Thanks,
 Mike Hamburg
 

In particular, it would be nice to be able to specialise based on the
instances, as we do for [a] --> [Int], e.g.

RULES sum = sumInt :: [Int] -> Int

is fine in the current system. So I could imagine some nice
specialisations based on say, the good old Ord:

RULES nub = nubOrd :: (Eq a, Ord a) => [a] -> [a]

which might use a Map, say.

I don't know how costly this instance matching would be.

-- Don


Re: RULES and type classes

2007-03-29 Thread Pepe Iborra


On 29/03/2007, at 11:38, Mike Hamburg wrote:


Is there any way to use RULES substitutions with type classes?

I'm writing a reactive programming arrow (same idea as Yampa, different
design goals), and it would help performance (and not just in the speed
sense) to be able to tell when a value derived with arr hasn't changed.
So I'd like to be able to specialize arr to functions whose result is an
instance of Eq.

I tried
{-# RULES "reactiveArr/Eq" reactiveArr = reactiveArrEq #-}
but got the message

Control/Arrow/Reactive/Reactive.hs:89:41:
No instance for (Eq b)
  arising from instantiating a type signature
  at Control/Arrow/Reactive/Reactive.hs:89:41-89
Possible fix: add (Eq b) to the tcRule
When checking the transformation rule "reactiveArr/Eq"

I tried adding various sorts of type signatures, but I couldn't find any
way around this... is it a restriction in the RULES rewrite engine?  Is
there a workaround, or some mechanism other than RULES that I should be
using?  I could write a special arrEq function, but I'd like to
minimize the number of extraneous operations outside the Arrow class.



I thought this could be done already. In ghc 6.7:

\begin{code}
{-# RULES "hello/helloBool" hello = helloBool #-}
{-# RULES "hello/helloEq" forall (x::Eq a=>a) . hello x = helloEq x #-}
{-# RULES "hello/helloF" forall (f::Eq b=>a->b) . hello f = helloF f #-}

data O = O

hello _ = "hello"

helloBool :: Bool -> String
helloBool _ = "hello bool"

helloEq :: Eq a => a -> String
helloEq _ = "hello Eq"

helloF :: Eq b => (a->b) -> String
helloF _ = "hello F"

f :: Eq b => String -> b
f = undefined

normal = hello O
bool = hello False
char = hello 'a'

feq :: String
feq  = hello (f::String -> Bool)

\end{code}

pep:~/code/snippets$ ghc -O -fglasgow-exts -c rules.hs
pep:~/code/snippets$ ghci rules.hs
   ___         ___ _
  / _ \ /\  /\/ __(_)
 / /_\// /_/ / /  | |      GHC Interactive, version 6.7.20070303, for Haskell 98.
/ /_\\/ __  / /___| |      http://www.haskell.org/ghc/
\____/\/ /_/\____/|_|      Type :? for help.

Loading package base ... linking ... done.
[1 of 1] Skipping  Rules            ( rules.hs, rules.o )
Ok, modules loaded: Rules.
Prelude Rules> normal
"hello"
Prelude Rules> bool
"hello bool"
Prelude Rules> char
"hello Eq"
Prelude Rules> feq
"hello F"


Is that what you wanted?

pepe


Re: RULES and strictness

2006-12-01 Thread Kirsten Chevalier


when performing strictness/absence/one-shot analysis, are rule bodies
taken into account?


No.


like, would the following cause trouble making const
no longer absent in its second argument?

const x y = x

{-# RULES "const/seq" forall a b . const a b = seq b a #-}

by trouble I mean the compiler failing or producing bad code in some
way, rather than the obvious trouble of changing the meaning of const.



No, because of the worker/wrapper transform. You can think of
strictness analysis as doing something like this in this case:

const x y = x
=>

const x y = constW x

constW x = let y = error "entered absent argument!" in x

(of course, strictness analysis only does the work of determining that
const is absent in its second argument, and the worker/wrapper
transform phase is a separate pass that happens later.)

So, from then on, GHC will inline const wherever possible, and the
rule "const/seq" will no longer be applicable, because it applies to
const, not constW. Note that even if const doesn't end up getting
inlined for some reason, the presence of the rule doesn't cause any
problems. It's perfectly valid to replace (const a b) with (constW a)
or with (seq b a).
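For reference, a standalone version of the rule in question does compile; this is just my sketch (shadowing the Prelude's const, with a NOINLINE added so the rule has a chance to match before const disappears), not anything from GHC or from the original message:

    module ConstSeq where

    import Prelude hiding (const)

    -- A local const we can attach a rule to.
    const :: a -> b -> a
    const x y = x
    {-# NOINLINE const #-}

    -- The rule under discussion: replace (const a b) by (seq b a),
    -- deliberately making const strict (and non-absent) in its second argument.
    {-# RULES "const/seq" forall a b. const a b = seq b a #-}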


it is noted in the deforestation papers that rules can change the
sharing properties of code and we are okay with that. I was wondering if
they could safely change the strictness or absence properties of code
as well?



Given the example above, I think it's fairly safe to say that rules
can safely change strictness and absence properties, or at least
safe in the sense that it shouldn't cause the compiler to crash or
to generate code that crashes. (Since even GHC's strictness analysis
on its own, ignoring the presence of rules, hasn't been formally
proven correct yet, I'm being rather hand-wavy in saying that.) But,
it is kind of weird to have a rule that changes these properties. I
realize the example you gave is just for the sake of argument, but I
can't think of a real situation where you'd want to have a rule that
changed strictness properties (rather than expressing the strictness
you wanted in the code), though I'm open to being convinced otherwise.

Cheers,
Kirsten

--
Kirsten Chevalier* [EMAIL PROTECTED] *Often in error, never in doubt
I flip on the television and watch sad movies / And look for sad sick people
like me -- Tegan and Sara


RE: RULES pragmas

2006-07-14 Thread Simon Peyton-Jones
I've started a Wiki page, attached to GHC's collaborative documentation,
as a place to collect advice about RULES.
http://haskell.org/haskellwiki/GHC/Using_Rules

Please feel free to elaborate it.

Simon


| -Original Message-
| From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]
| On Behalf Of Donald Bruce Stewart
| Sent: 12 July 2006 01:41
| To: Malcolm Wallace
| Cc: glasgow-haskell-users@haskell.org
| Subject: Re: RULES pragmas
| 
| Malcolm.Wallace:
|  I have a question about {-# RULES #-} pragmas.  Here is a very simple
|  attempt to use them:
| 
|  module Simplest where
|  {-# RULES
|  "simplestRule"   forall x.   id (id x) = x
|#-}
|  myDefn = id (id 42)
| 
|  I want to verify whether ghc-6.4.1 does actually fire this rule, but
|  have so far been unable to do so.  According to the manual (section
|  7.10.5), the flag -ddump-rules should list simplestRule if it has been
|  parsed correctly, and -ddump-simpl-stats should list the number of times
|  it has fired.  But it does not appear in either listing.
| 
|  Reasoning that I have the syntax wrong, I have tried numerous variations
|  on the indentation, added type signatures, etc., all to no avail.
| 
|  So what am I doing wrong?  And is there any way to ask the compiler to
|  give a warning if the RULES pragma contains errors?
| 
| In this case, it's because it's missing -fglasgow-exts, I think.
| The following works for me with both 6.4 and 6.5 compilers:
| 
| module Simplest where
| 
| {-# RULES
| "simplestRule" forall x. id (id x) = x
| #-}
| 
| myDefn = id (id 42)
| 
| when compiled with:
| $ ghc-6.4.2 -fglasgow-exts -c -ddump-simpl-stats A.hs
| 
|  Grand total simplifier statistics
| Total ticks: 11
| 
| 2 PreInlineUnconditionally
| 3 PostInlineUnconditionally
| 1 UnfoldingDone
| 1 RuleFired
| 1 simplestRule
| 4 BetaReduction
| 2 SimplifierDone
| 
| However, in general, you need to be careful that your identifiers
| weren't inlined in the first phase. To control this we add an INLINE [1]
| pragma to identifiers we want to match in rules, to ensure they haven't
| disappeared by the time the rule matching comes around.
| 
| Also, you need -O to have rules kick in locally.
| 
| So,
| module Simplest where
| 
| {-# RULES
| "simplestRule" forall x. myid (myid x) = x
| #-}
| 
| myDefn = myid (myid 42)
| 
| myid x = x
| {-# INLINE [1] myid #-}
| 
| And:
| $ ghc-6.4.2 -fglasgow-exts -O -c -ddump-simpl-stats A.hs
| 
|  Grand total simplifier statistics

| Total ticks: 15
| 
| 6 PreInlineUnconditionally
| 2 UnfoldingDone
| 1 RuleFired
| 1 simplestRule
| 5 BetaReduction
| 1 KnownBranch
| 8 SimplifierDone
| 
| Cheers,
|   Don


Re: RULES pragmas

2006-07-12 Thread Malcolm Wallace
[EMAIL PROTECTED] (Donald Bruce Stewart) wrote:

  So what am I doing wrong?  And is there any way to ask the compiler
  to give a warning if the RULES pragma contains errors?
 
 In this case, it's because it's missing -fglasgow-exts, I think.

Ah, thank you.  The missing (and undocumented) option.  Is there any
reason why -fglasgow-exts should be required?  Judging by the flag
reference material in section 4.17.15, -frules-off is used to turn RULES
off explicitly, but there is no corresponding flag to turn them on -
hence I assumed they would be enabled by default when -O or -O2 is set.

Regards,
Malcolm


Re: RULES pragmas

2006-07-12 Thread Malcolm Wallace
Malcolm Wallace [EMAIL PROTECTED] wrote:

 Ah, thank you.  The missing (and undocumented) option.

Actually, now I came to submit a patch to the manual, I discover that it
/is/ documented, but at the beginning of section 7.  (But on the index
page on the web, the link to section 7 is two whole screenfuls away from
the link to 7.10, so it is no wonder I didn't think to look there
first.)

Maybe there are other subsections of 7 that could usefully gain a
similar pointer to the need for -fglasgow-exts?  For instance, are other
pragmas (INCLUDE, INLINE, UNPACK) only activated by -fglasgow-exts?

Regards,
Malcolm


Re: RULES pragmas

2006-07-12 Thread Simon Marlow

Malcolm Wallace wrote:

Malcolm Wallace [EMAIL PROTECTED] wrote:



Ah, thank you.  The missing (and undocumented) option.



Actually, now I came to submit a patch to the manual, I discover that it
/is/ documented, but at the beginning of section 7.  (But on the index
page on the web, the link to section 7 is two whole screenfuls away from
the link to 7.10, so it is no wonder I didn't think to look there
first.)

Maybe there are other subsections of 7 that could usefully gain a
similar pointer to the need for -fglasgow-exts?  For instance, are other
pragmas (INCLUDE, INLINE, UNPACK) only activated by -fglasgow-exts?


I believe RULES is the only pragma that requires -fglasgow-exts, the reason
being that the syntax inside RULES uses the 'forall' keyword, which is only
enabled by -fglasgow-exts.


If you could submit a doc patch, that would be great.

Cheers,
Simon


Re: RULES pragmas

2006-07-11 Thread Donald Bruce Stewart
Malcolm.Wallace:
 I have a question about {-# RULES #-} pragmas.  Here is a very simple
 attempt to use them:
 
 module Simplest where
 {-# RULES
 "simplestRule"   forall x.   id (id x) = x
   #-}
 myDefn = id (id 42)
 
 I want to verify whether ghc-6.4.1 does actually fire this rule, but
 have so far been unable to do so.  According to the manual (section
 7.10.5), the flag -ddump-rules should list simplestRule if it has been
 parsed correctly, and -ddump-simpl-stats should list the number of times
 it has fired.  But it does not appear in either listing.
 
 Reasoning that I have the syntax wrong, I have tried numerous variations
 on the indentation, added type signatures, etc., all to no avail.
 
 So what am I doing wrong?  And is there any way to ask the compiler to
 give a warning if the RULES pragma contains errors?

In this case, it's because it's missing -fglasgow-exts, I think.
The following works for me with both 6.4 and 6.5 compilers:

module Simplest where

{-# RULES
"simplestRule" forall x. id (id x) = x
#-}

myDefn = id (id 42)

when compiled with:
$ ghc-6.4.2 -fglasgow-exts -c -ddump-simpl-stats A.hs

 Grand total simplifier statistics
Total ticks: 11

2 PreInlineUnconditionally
3 PostInlineUnconditionally
1 UnfoldingDone
1 RuleFired
1 simplestRule
4 BetaReduction
2 SimplifierDone

However, in general, you need to be careful that your identifiers
weren't inlined in the first phase. To control this we add an INLINE [1]
pragma to identifiers we want to match in rules, to ensure they haven't
disappeared by the time the rule matching comes around.

Also, you need -O to have rules kick in locally.

So, 
module Simplest where

{-# RULES
"simplestRule" forall x. myid (myid x) = x
#-}

myDefn = myid (myid 42)

myid x = x
{-# INLINE [1] myid #-}

And:
$ ghc-6.4.2 -fglasgow-exts -O -c -ddump-simpl-stats A.hs

 Grand total simplifier statistics 
Total ticks: 15

6 PreInlineUnconditionally
2 UnfoldingDone
1 RuleFired
1 simplestRule
5 BetaReduction
1 KnownBranch
8 SimplifierDone

Cheers,
  Don


RE: RULES

2006-04-24 Thread Simon Peyton-Jones
I'm afraid that RULES are applied *after* type checking and dictionary
construction.  Recall that freeze has type

  freeze :: (Ix i, MArray a e m, IArray b e) => a i e -> m (b i e)

Your original RULE gives 

Could not deduce (Unboxed e, HasDefaultValue e)
  from the context (IArray UArray e, MArray IOUArray e IO, Ix i)
  arising from use of `freezeIOUArray' at Foo.hs:16:28-41
Probable fix: add (Unboxed e, HasDefaultValue e) to the tcRule
When checking the transformation rule freeze/IOUArray

Why?  Because you are trying to rewrite an application
freeze dix dma dia  --->  freezeIOUArray dunb dhas

where dix, dma, dia are dictionaries of type Ix i, MArray IOUArray e m,
and IArray b e, respectively.  To do the rewrite we need to manufacture
dictionaries dunb::Unboxed e and dhas::HasDefaultValue e, respectively.
How can we make the latter from the former?  We can't.

Hence the error message.

Your working version will generate a rewrite like this:

freeze dix dma dia (x dunb dhas)  --->  freezeIOUArray dunb dhas x

(use -ddump-rules to see the rule that GHC generates).  This is a fine
rule, but it is most unlikely to match any actual terms!

I don't know an easy way to do what you want.  Because you really do
need to come up with new dictionaries, and that changes the types.

Simon


| -Original Message-
| From: [EMAIL PROTECTED]
[mailto:glasgow-haskell-users-
| [EMAIL PROTECTED] On Behalf Of Bulat Ziganshin
| Sent: 21 April 2006 17:42
| To: GHC Users Mailing List
| Subject: RULES
| 
| Hello GHC,
| 
| I have the following function:
| 
| freezeIOUArray :: (Unboxed e, HasDefaultValue e, Ix i) => IOUArray i e -> IO (UArray i e)
| 
| when I try to use the rule:
| 
| {-# RULES
| "freeze/IOUArray" freeze = freezeIOUArray
| #-}
| 
| the compiler barks. I found a way that compiled:
| 
| {-# RULES
| "freeze/IOUArray" forall (x :: (forall s e i . (Unboxed e, HasDefaultValue e) => IOUArray i e)) . freeze x = freezeIOUArray x
| #-}
| 
| but does this rule really work? SPJ once mentioned that type checking
| is just impossible in the current rules implementation
| 
| 
| --
| Best regards,
|  Bulat  mailto:[EMAIL PROTECTED]
| 


RE: RULES pragma with class constraint

2006-03-20 Thread Simon Peyton-Jones
Definitely not at present, and I see no easy way to implement it. 

RULES are implemented by simple matching in Core.  A call to nub will have an 
Eq dictionary, but Core knows nothing of instance declarations, and has no clue 
how to make an Ord dictionary.  But an Ord dictionary is what you want on the 
RHS of the rule.

The type checker, on the other hand, knows about dictionary construction, but 
nothing about RULES.

Simon

| -Original Message-
| From: [EMAIL PROTECTED] [mailto:glasgow-haskell-users-
| [EMAIL PROTECTED] On Behalf Of John Meacham
| Sent: 20 March 2006 11:49
| To: glasgow-haskell-users@haskell.org
| Subject: RULES pragma with class constraint
| 
| Is it possible to create a RULES that fires only if a type has a given
| class constraint? something like:
| 
|  snub :: Ord a => [a] -> [a]
|  snub xs = f Set.empty xs where
|      f _   []     = []
|      f set (x:xs)
|          | x `Set.member` set = f set xs
|          | otherwise          = x : f (Set.insert x set) xs
| 
| 
|  {-# RULES "nub/snub" Ord a => forall a xs .  nub (xs::[a]) = snub xs #-}
| 
| Such a rule would probably only be able to fire on monotypes with known
| instances, since otherwise the dictionary won't necessarily be available,
| but it seems like it might be a useful feature even with that
| restriction.
| 
| John
| 
| --
| John Meacham - ⑆repetae.net⑆john⑈
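As an aside (not part of either message): with RULES as they exist, the closest approximation is one rule per monomorphic instantiation, where the required Ord dictionary is statically known; it is the same trick suggested for genericLength elsewhere in this archive. A sketch:

    module Snub where

    import Data.List (nub)
    import qualified Data.Set as Set

    snub :: Ord a => [a] -> [a]
    snub = go Set.empty
      where
        go _   []     = []
        go set (x:xs)
          | x `Set.member` set = go set xs
          | otherwise          = x : go (Set.insert x set) xs

    -- One rule per concrete type; at Int the Ord dictionary is fixed,
    -- so no dictionary has to be invented at rewrite time.
    {-# RULES "nub/snub/Int" nub = snub :: [Int] -> [Int] #-}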


Re: RULES pragma with class constraint

2006-03-20 Thread Bulat Ziganshin
Hello John,

Monday, March 20, 2006, 2:49:14 PM, you wrote:

JM Is it possible to create a RULES that fires only if a type has a given
JM class constraint? something like:

 snub :: Ord a => [a] -> [a]
 snub xs = f Set.empty xs where
     f _   []     = []
     f set (x:xs)
         | x `Set.member` set = f set xs
         | otherwise          = x : f (Set.insert x set) xs


 {-# RULES "nub/snub" Ord a => forall a xs .  nub (xs::[a]) = snub xs #-}

the following compiles but I really never tested whether it works :)

{-# RULES
"freeze/IOUArray"       forall (x :: (forall s e i . (Unboxed e, HasDefaultValue e) => IOUArray i e)) . freeze x = freezeIOUArray x
"thaw/IOUArray"         forall (x :: (forall   e i . (Unboxed e, HasDefaultValue e) =>   UArray i e)) . thaw   x = thawIOUArray   x
"unsafeFreeze/IOUArray" forall (x :: (forall s e i . (Unboxed e, HasDefaultValue e) => IOUArray i e)) . unsafeFreeze x = unsafeFreezeIOUArray x
"unsafeThaw/IOUArray"   forall (x :: (forall   e i . (Unboxed e, HasDefaultValue e) =>   UArray i e)) . unsafeThaw   x = unsafeThawIOUArray   x
#-}

see http://freearc.narod.ru/ArrayRef.tar.gz for the whole story


-- 
Best regards,
 Bulatmailto:[EMAIL PROTECTED]



Re: RULES pragma with class constraint

2006-03-20 Thread John Meacham
On Mon, Mar 20, 2006 at 12:09:41PM -, Simon Peyton-Jones wrote:
 Definitely not at present, and I see no easy way to implement it. 
 
 RULES are implemented by simple matching in Core.  A call to nub will
 have an Eq dictionary, but Core knows nothing of instance
 declarations, and has no clue how to make an Ord dictionary.  But an
 Ord dictionary is what you want on the RHS of the rule.
 
 The type checker, on the other hand, knows about dictionary
 construction, but nothing about RULES.

I suspected this was the case. Perhaps it could be solved with something
like rule schemes, where a RULES pragma defines a template that is
instantiated as new types that meet certain criteria are introduced. so
a RULE with a class constraint would actually not be a single rule, but
be copied and instantiated every time a new instance is declared at the
new type.

Something like this might also be useful for writing rules that apply to
all built in C numeric types for instance..

not sure how much demand there is for something like this though. I have
a sort of hack in jhc to write rules that apply to any type that maps
directly to a C integral type for basic numeric rules. but it isn't user
extensible in any way. (other than by adding new primitive numeric types
that is)

John

-- 
John Meacham - ⑆repetae.net⑆john⑈


RE: Rules for use of unsafeThaw...

2005-11-04 Thread Simon Marlow
On 03 November 2005 17:08, Jan-Willem Maessen wrote:

 I've recently been experimenting with unsafeFreeze/unsafeThaw in
 GHC.  Judicious use of these functions vastly reduces GC overhead in
 Data.HashTable.  However, a slightly mis-timed GC will cause the
 whole mess to crash.  I am attempting to understand the invariants
 required to safely use unsafeFreeze/unsafeThaw.
 
 I believe the following usage ought to be 100% safe:
 
 1) Take the last and only reference to a mutable array, and call
 unsafeFreeze to obtain an immutable array.
 
 2) Take the last and only reference to an immutable array, and call
 unsafeThaw to obtain a mutable array.

I would like it to be the case that no usage of unsafe{Freeze,Thaw} can
crash the GC.  That perhaps isn't true right now, but I'd like to
understand why not.  

Multiple unsafeThaws on the same array might cause problems in 6.4.1
(but not the HEAD), due to the way the old generation mutable list
works.  That's fixable, if it is the problem.  Multiple unsafeFreezes
should be harmless.  Can you give me a test program?

Cheers,
Simon
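For what it's worth, here is a minimal sketch of the usage pattern being described (my own illustration, not the Data.HashTable code; the module names are those of the current array package): every array value is reached through exactly one reference, frozen once and thawed once.

    module FreezeThawSketch where

    import Data.Array (Array, (!))
    import Data.Array.IO (IOArray, newArray, writeArray)
    import Data.Array.Unsafe (unsafeFreeze, unsafeThaw)

    roundTrip :: IO Int
    roundTrip = do
      marr  <- newArray (0, 9) 0 :: IO (IOArray Int Int)
      writeArray marr 3 42
      iarr  <- unsafeFreeze marr :: IO (Array Int Int)     -- last use of marr
      let v = iarr ! 3
      marr' <- unsafeThaw iarr :: IO (IOArray Int Int)     -- last use of iarr
      writeArray marr' 3 (v + 1)
      return v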


RE: RULES for SPECIALIZ(E)ations

2003-10-20 Thread Simon Peyton-Jones

| 
| I suppose I'm being bitten by User's Guide 7.8.2:
| 
|   If more than one rule matches a call, GHC will choose one arbitrarily
|   to apply.
| 
| even if a bit later it says:
| 
|   So a rule only matches if the types match too.
| 
| Am I understanding right, and is that so?

I think so.  After all, the types match for count/generic too.

Maybe you want a rule

genericLength = length

which will replace a call to (genericLength at type Int) by a call to
length?
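Spelled out as a compilable sketch (mine, not verbatim from the message), the type signature on the right-hand side pins the rule to the Int instantiation:

    module GenericLengthRule where

    import Data.List (genericLength)

    -- Only calls of genericLength whose result type is Int can match,
    -- because the annotated right-hand side fixes that type.
    {-# RULES "genericLength/Int" genericLength = length :: [a] -> Int #-}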


You can also use phases to control order of rule application (albeit
it's crude).

http://www.haskell.org/ghc/docs/latest/html/users_guide/pragmas.html#PHASE-CONTROL

Simon




RE: RULES/SPECIALIZE not parsing:

2002-12-04 Thread Simon Marlow

 Rules.hs:
 
 module Rules where
 
 my_id :: a -> a
 my_id a = a
 
 my_int_id :: Int -> Int
 my_int_id a = a
 
 {-# RULES my_id = my_int_id #-}

Each rule should begin with a string, like:

  {-# RULES "my_id" my_id = my_int_id #-}

 {-# SPECIALIZE my_id :: Int -> Int = my_int_id #-}

These kinds of specialise pragmas aren't supported any more, because the
same effect can be achieved using RULES (it looks like that's what you
were trying to do).

The docs have some bogusness, which I'll fix.

Cheers,
Simon




RE: Quick question re RULES

2000-11-02 Thread Simon Peyton-Jones

| PrelBase contains the appended code.  Am I correct in
| assuming that the stuff is structured as it is, because the
| "map" rule first `breaks' the map `open', which exposes it
| to the various foldr/build rules, and if no foldr/build rule
| matches, the "mapList" rule `closes' it again in a later
| phase of optimisation - after build was inlined?  If so, it
| seems like the whole thing depends a bit on the timing of
| the various optimsations (the map might be closed again
| before any of the foldr/build rules fires).  Is this maybe
| the reason that build has an INLINE 2 pragma, which delays
| its inlining, and thus, the closing of the map?

That's exactly it.

| All very cunning, it seems ;-)

A bit too cunning for comfort.  I wish I could think of a better
way to expose map's impl using build without paying a price
for doing so if it turns out not to fuse with anything.  But so
far I've failed to do so.
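For readers without PrelBase to hand, the shape being described is roughly the following; this is a paraphrase from memory using a local myMap, not the actual library source:

    module MapFusionSketch where

    import GHC.Exts (build)

    mapFB :: (b -> l -> l) -> (a -> b) -> a -> l -> l
    mapFB c f = \x ys -> c (f x) ys

    myMap :: (a -> b) -> [a] -> [b]
    myMap f []     = []
    myMap f (x:xs) = f x : myMap f xs
    {-# NOINLINE [0] myMap #-}   -- give the rules a chance to fire first

    {-# RULES
    "map"     [~1] forall f xs. myMap f xs = build (\c n -> foldr (mapFB c f) n xs)
    "mapList" [1]  forall f.    foldr (mapFB (:) f) [] = myMap f
      #-}

The "map" rule opens the map up into build/foldr form in the early phases so that foldr/build fusion can see it, and "mapList" closes it back into a plain map in phase 1 if nothing fused with it.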

Is this a propos of your new transformations for parallelism?

Simon




RE: Quick question re RULES

2000-11-02 Thread Manuel M. T. Chakravarty

Simon Peyton-Jones [EMAIL PROTECTED] wrote,

 Is this a propos of your new transformations for parallelism?

Precisely!  Trying to figure out how to generate the fastest
possible array code with GHC.

Manuel




RE: rules

1999-07-08 Thread Simon Peyton-Jones

You need to write the function in prefix form, thus:

{-# RULES "T"  forall x. (||) True x = True  #-}

I know this is stupid, but I haven't got around to fixing it.

Simon

 -Original Message-
 From: [EMAIL PROTECTED] 
 Sent: Thursday, July 08, 1999 2:46 PM
 To: [EMAIL PROTECTED]
 Subject: rules
 
 
 I am trying Rules with  ghc-CVS-June-end-1999:
 
   module T where
   {-# RULES "T"  forall x. (True || x) = True  #-}
 
   f = True || False
 
 ghc -c -O -fglasgow-exts  t.hs
 yields
 Rule T: Illegal left-hand side: (True || x)
 LHS must be of form (f e1 .. en) where f is not forall'd
  
 I could not invent any simplest Rule to pass the compilation.
 
 
 --
 Sergey Mechveliani
 [EMAIL PROTECTED]
 
 
 



Re: rules and bottom

1999-06-08 Thread Fergus Henderson

On 07-Jun-1999, S.D.Mechveliani [EMAIL PROTECTED] wrote:
 One more question on the program simplification and `bottom'.
 
 People say that the transformations like   x - x  ->  0  :: Integer
 are hardly ever applicable, because  x  may occur `undefined'.

This issue was already resolved -- someone already mentioned the solution.
But obviously you missed it, so let me restate the answer: instead of
using the transformation

x - x   == 0

you should use the transformation 

x - x   == x `seq` 0

 Probably, the compiler can prove "defined" very often.

Indeed.  It can then simplify things further, using the rule

x `seq` e   == e   if x is not _|_

-- 
Fergus Henderson [EMAIL PROTECTED]  |  "I have always known that the pursuit
WWW: http://www.cs.mu.oz.au/~fjh  |  of excellence is a lethal habit"
PGP: finger [EMAIL PROTECTED]| -- the last words of T. S. Garp.





Re: rules and bottom

1999-06-07 Thread J|rgen Gustavsson

On Mon, 7 Jun 1999, S.D.Mechveliani wrote:

 One more question on the program simplification and `bottom'.
 
 People say that the transformations like   x - x  ->  0  :: Integer
 
 are hardly ever applicable, because  x  may occur `undefined'.

There is another problem lurking here as well, namely space issues.
Consider the following program. It runs in constant space.

let xs = [1..n]
    x  = head xs
in  x - x + last xs + x

Now transforming it using
 M - M  ->  0 and
 0 + M  ->  M
yields

let xs = [1..n]
    x  = head xs
in  last xs + x

which needs space proportional to n.

Regards,
 Jörgen Gustavsson.






Re: rules and bottom

1999-06-07 Thread Paul Hudak

 There is another problem lurking here as well, namely space issues.
 Consider the following program. It runs in constant space.
 
 let xs = [1..n]
     x  = head xs
 in  x - x + last xs + x
 
 Now transforming it using
  M - M  ->  0 and
  0 + M  ->  M
 yields
 
 let xs = [1..n]
     x  = head xs
 in  last xs + x
 
 which needs space proportional to n.

I don't think that this analysis is very "portable".  I.e. there is
nothing in the Haskell Report that says that arguments to strict
functions are evaluated left-to-right or in any other order.  Thus both
of these programs may run in constant or linear space, depending on what
the compiler decides to do.

IMHO: Haskell needs a formal operational semantics (more than it needs a
formal denotational semantics).

  -Paul





RE: rules for type casting

1999-05-17 Thread Simon Peyton-Jones

   {rules Num a => x::a, y::[a]  ==>  x+y = [x]+y}
   instance Num a => Num [a] where ...
  one could expect for  x :: Num b => b  the casting
  x + [x,y]  -->  [x] + [x,y]

  Provided the two sides of the rules have the same overall type,
  GHC will be happy.  But since there are no side condition, your
  rule will rewrite *every* x+y to [x]+y, which isn't what you 
  want I guess.
  [..]
 
 Why?  This rule contains the  x::a, y::[a]  condition,   
 so  1+1  does not fit the rule, and  1+[1]  does.


Hmm.  That's true.  My current matching algorithm doesn't compare
types when matching a pattern variable against a term, but perhaps
it should.  It *does* match types when they occur in type applications;
e.g.length [True,False] 
actually expands (after type checking) to
length Bool (True : False : nil)

The 'Bool' argument to length matches the 'forall' in length's type.
And the matching algorithm does match these type args.

Now I think about it, it is perhaps inconsistent not to match the
type of non-polymorphic variables too.  I'll think some more about
that one.

Simon





Re: rules for type casting

1999-05-16 Thread Otgonbayar Uuye

Simon Peyton-Jones wrote:
 
  Another question on *rules*.
  Could they help the implicit type casting?
  For example, with
    {rules Num a => x::a, y::[a]  ==>  x+y = [x]+y}
    instance Num a => Num [a] where ...
  one could expect for  x :: Num b => b  the casting
   x + [x,y]  -->  [x] + [x,y]
 
 
 Provided the two sides of the rules have the same overall type,
 GHC will be happy.  But since there are no side condition, your
 rule will rewrite *every* x+y to [x]+y, which isn't what you

every x+y where x::a and y::[a], in which case plain x+y doesn't make sense
anyway.

 want I guess.

[..]

 Simon

Cheers, Otgonbayar





RE: rules for type casting

1999-05-14 Thread Simon Peyton-Jones

 Another question on *rules*.
 Could they help the implicit type casting? 
 For example, with 
   {rules Num a => x::a, y::[a]  ==>  x+y = [x]+y}
   instance Num a => Num [a] where ...
 one could expect for  x :: Num b => b  the casting
  x + [x,y]  -->  [x] + [x,y]
 

Provided the two sides of the rules have the same overall type,
GHC will be happy.  But since there are no side condition, your
rule will rewrite *every* x+y to [x]+y, which isn't what you 
want I guess.

I rather doubt that rules will help you here.  Type casting
is the business of the type checker, not an optimiser.  The
latter takes a correct program and rewrites it to another correct program
(hopefully). 

Simon





RE: rules

1999-05-10 Thread Simon Peyton-Jones

Thanks to everyone who has contributed to the discussion about 
transformation rules.  There is clearly something interesting
going on here!

There is clearly a huge spectrum of possibilities, ranging from
nothing at all to a full theorem-proving system.  In adding rules
to GHC I'm trying to start with something very modest.  The rules
I've implemented so far operate solely by pattern matching, have
no side conditions, do not guarantee confluence or termination, and
must have a global constant (usually a function) in the head.  None of
this is particularly original, but I'm not aware of any optimising
compiler that provides such a facility.  I think of it as a way of
specifying domain-specific transformations that go with a library
you write.

I'm interested to see what can be done with this simple system before doing 
anything more elaborate.  One difficulty about going further is that it is 
easier to say the *kind* of thing one might like than to specify it
precisely.  My hope is that experience from a concrete implementation
may give us better ideas of what to do next.

Anyway, first I have to tidy up the implementation, document more precisely
what it does, and check it in so the brave among you can play with it.

Simon





RE: rules

1999-05-10 Thread Greg Michaelson

I think that John Darlington's group at Imperial College were the first
to use rule-driven program transformation in their various skeleton/coordination
language parallelising compilers.

Here, Tore Bratvold used simple higher order function/composition
distribution transformation rules in his parallelising SML to occam
compiler (SkelML). We're now building a more elaborate version
into our SML to C+MPI parallelising compiler. We're also investigating
the use of the CLAM proof planner to prove transformations correct.
We're interested in transformations that group/ungroup higher order
functions to try and optimise skeleton based parallelism. We're also
investigating program synthesis to "lift" higher order functions from
programs that don't contain them explicitly.

Greg Michaelson





Re: {-# rules

1999-05-09 Thread trb

Lennart Augustsson writes:
  Wolfram Kahl wrote:
  
   In the end, my dreams even include a proof format
   that can be checked by the compiler :-)
  
  Dependent types!  That's all you need.

Absolutely!

I have read your Cayenne paper

( http://www.cs.chalmers.se/%7Eaugustss/cayenne/paper.ps )

and in that light the proposals made in this thread seem clunky and lame.

Why kludge around with things like this if we can have dependent types ?

I ask the above question seriously, since I haven't seen Simon's original
post. Would it be hard to implement dependent types for ghc (keeping in mind
that the high payoff would justify some effort) ?

Tim





Re: {-# rules

1999-05-08 Thread Lennart Augustsson

Wolfram Kahl wrote:

 In the end, my dreams even include a proof format
 that can be checked by the compiler :-)

Dependent types!  That's all you need.

--

-- Lennart








Re: {-# rules

1999-05-04 Thread Hans Aberg

At 11:02 + 1999/05/03, Wolfram Kahl wrote:
With respect to the new RULES mechanism presented by
Simon Peyton Jones (Thu, 22 Apr 1999),
Carsten Schultz [EMAIL PROTECTED] writes
  [...]
   And what about algebraic simplification? Say,
  The same applies to our beloved monads.
  The compiler could be told about the monad laws.

Somebody else (Olaf Chitil?) already remarked that most ``Monads'' aren't.

So to be more consistent,
how about changing the current class Monad into
(motivation see signature ;-):

 class HasMonadOps m where
   return :: a -> m a
   (>>=)  :: m a -> (a -> m b) -> m b
   ...

and introduce:

 class (HasMonadOps m) => Monad m where
 {-# RULES
   "Monad-rightId" forall f.    f >>= return    =  f
   "Monad-leftId"  forall f,x.  return x >>= f  =  f x
   ...
  #-}

Even if some still will not like this, would it be within
the scope of the current RULES mechanism?

For expressing algebraic relations, I think one can use universal algebra
by factoring through the free universal algebra of a particular set of
relations. For example, if one wants to state that a binary operator is
commutative, one can say that is should be defined on the symmetric set of
order two, that is the set of pairs (x, y) where (x, y) is equivalent to
(y, x). If one wants to say that an operator is associative (with
identity), then it factors through the free monoid, or list of the algebra
itself, so this amount to saying that it is defined on the set of lists.
And so on.

For monads, you might perhaps try doing the same thing, by introducing a
concept of "universal categories" and factorizing through them.

Otherwise, I think that Haskell need a better way to more exactly describe
what the object are. For example, a group G is formally a quadruple (G, e,
*, -1; R), where e is an identity, * multiplication, -1 an inverse, and R
some relations. Those details are wholly inexpressible in Haskell as
matters now stand. The same thing applies to monads, which are also known
as triples (plus relations). It is not possible to express in Haskell the
fact even that monads are triples, even less that they have some additional
monad relations to fulfill.

  Hans Aberg
  * Email: Hans Aberg mailto:[EMAIL PROTECTED]
  * Home Page: http://www.matematik.su.se/~haberg/
  * AMS member listing: http://www.ams.org/cml/







Re: {-# rules

1999-05-04 Thread peter



Hans aberg writes:

 
 For expressing algebraic relations, I think one can use universal algebra
 by factoring through the free universal algebra of a particular set of
 relations. For example, if one wants to state that a binary operator is
 commutative, one can say that it should be defined on the symmetric set of
 order two, that is the set of pairs (x, y) where (x, y) is equivalent to
 (y, x). If one wants to say that an operator is associative (with
 identity), then it factors through the free monoid, or list of the algebra
 itself, so this amounts to saying that it is defined on the set of lists.
 And so on.
 
 For monads, you might perhaps try doing the same thing, by introducing a
 concept of "universal categories" and factorizing through them.
 
 Otherwise, I think that Haskell need a better way to more exactly describe
 what the object are. For example, a group G is formally a quadruple (G, e,
 *, -1; R), where e is an identity, * multiplication, -1 an inverse, and R
 some relations. Those details are wholly inexpressible in Haskell as
 matters now stand. The same thing applies to monads, which are also known
 as triples (plus relations). It is not possible to express in Haskell the
 fact even that monads are triples, even less that they have some additional
 monad relations to fulfill.
 

When a group is expressed as a class G having operations e, *, and -1,
then implicitly (via well defined semantics), a group is a quadruple,
so I don't think the quadruple need be explicit in the Haskell program.
The relation R is also implicit if properties are stated in the form
of equations concerning associativity and so on. I think providing a
way to express properties of the class is sufficient. While 
universal algebra is also a mathematical framework capable of
expressing the properties, for a programming language logical formulae
is a more natural way to go.


Jerzy Karczmarczuk writes

 Perhaps, (no, certainly), I have different mathematic sensibility than
 Hans Aberg, but what I *really need* from Haskell is the possibility to
 *code* something efficiently, and not just to represent it. 
 
 Saying that an operator is commutative does not help me, unless I have
 some ways to change the argument order to optimize the evaluation of
 a complex expression. This might be a dynamic problem, for example a
 Smalltalk object is (might be) able to resend automatically the message 
 to its argument, with itself as the argument (switching:
 receiver-arg);
 this enhances the possibility of forcing the type conversion.
 
 The associativity might be used to automatically optimize some recursive
 evaluations through embedding, for example transforming the recursive
 n! = n*(n-1)! into tail recursion. I know how to do that by hand.
 
 The monadic laws and some other theorems readable from the commutativity
 of some categorical diagrams were for me always a way to restructure the
 code in order to optimize it, for example (map f) . (map g)  -->  map (f . g)
 is a kind of deforestation. Anyway, saying that the associativity
 means that the operator is defined on a set of lists doesn't make
 me happy at all. But I am a mathematical troglodyte, and a
 non-constructive elegance in programming has no sex-appeal for me...
 
 
 Jerzy Karczmarczuk
 Caen, France
 

I agree that non constructive definitions are not what I want.
I also agree that enabling better optimizations is a good motivation
for stating properties of an object. There is another motivation for
stating properties which is my main motivation; this is to increase
the assurance that the software works correctly.

Cheers
Peter White, Motorola


- End Included Message -






RE: {-# rules

1999-05-04 Thread Sigbjorn Finne (Intl Vendor)


Mark P Jones [EMAIL PROTECTED] writes: 
 
...
 
 [Aside: As a general comment to all readers of the Haskell mailing
 list, perhaps I can suggest:  if you've posted something to the list
 within the last two weeks, and it hasn't received the kind of
 response that you were expecting, then please consider reposting,
 as many of us may have missed it the first time round.  Thanks!]
 

No, please don't - all messages have made it. The discussions you're
referring to occurred on the ghc-users list, before it was diverted
to this list.

A partial account can be found via 

  http://www.dcs.gla.ac.uk/mail-www/glasgow-haskell-users/

but that mail-www gateway doesn't pick up replies that just Cc: the
list (go figure). Let me know if you want the complete digest, and
I'll forward.

--sigbjorn





Re: {-# rules

1999-05-04 Thread Hans Aberg

At 10:08 -0700 1999/05/04, peter [EMAIL PROTECTED] wrote:
Hans aberg writes:
When a group is expressed as a class G having operations e, *, and -1,
then implicitly (via well defined semantics), a group is a quadruple,
so I don't think the quadruple need be explicit in the Haskell program.
The relation R is also implicit if properties are stated in the form
of equations concerning associativity and so on. I think providing a
way to express properties of the class is sufficient. While
universal algebra is also a mathematical framework capable of
expressing the properties, for a programming language logical formulae
is a more natural way to go.

It is a pity that you do not give any motivation for your opinions, so that one
could follow the reasoning.

The thing is though that Haskell fixes the names of the symbols one can use
in say a group. So if one cannot express a group as a quadruple (G, e, *,
^-1), then there is no way from the concept of a group to derive the
concept of an Abelian group, and then by derivation saying that a ring is
an Abelian group under addition.

In fact this problem showed up in the posts a time ago where a fellow
wanted to make a library with the usual symbols + *. The suggestion for
solution was to rewrite the Prelude, otherwise it is not possible to do it
in Haskell.

In addition, if one cannot add the relations to the concept in any form, so
that a group then becomes (G, e, *, ^-1; R), where R is the relations, then
clearly there is no way the computer can catch the semantics no matter
what. And I thought this what was discussed in this thread.

Then one can think of different means of catching that semantics, universal
algebra, theorem provers or whatever.

So if one would want to eventually have a computer language with the
ability to catch some of the semantics used in math then those features are
surely necessary.

  Hans Aberg
  * Email: Hans Aberg mailto:[EMAIL PROTECTED]
  * Home Page: http://www.matematik.su.se/~haberg/
  * AMS member listing: http://www.ams.org/cml/







Re: {-# rules

1999-05-03 Thread peter


Wolfram Kahl writes:

 
 
 One might even imagine extended instance declarations
 like the following:
 
  instance Monad []
  {-# PROOF
   "Monad-rightId"
     forall f.  f >>= return
  =  concat (map return f)
  =  concat (map (:[]) f)
  =  case f of
       []     -> concat []
       (x:xs) -> concat ([x] : map (:[]) xs)
  =  case f of
       []     -> []
       (x:xs) -> x : concat (map (:[]) xs)
  =  f
   "Monad-leftId" ...
   ...
   #-}
 
 In the end, my dreams even include a proof format
 that can be checked by the compiler :-)
 
 
 
 Best regards,
 
 Wolfram
 

I enthusiastically support the ability to add properties to the class
declarations. Instead of putting proof texts in the program, I advocate
exporting the class declarations and the properties to a theorem prover.
The class declarations become definitions, and the properties become
proof obligations in the theorem prover. The export from Haskell could
be to some standard intermediate language, which is theorem prover
"agnostic". This intermediate language could be translated to ones
favorite theorem prover, be it HOL, PVS, COQ, ... It is likely that
different theorem provers would be useful for different properties.
Instead of the compiler checking the proofs, a configuration management
system could demand the proofs in some executable (checkable) format.
It is nice to be able to play with the program for a while without
proving it, to get the more mundane errors out before a proof is attempted.
I have found it to be a useless exercise to try and prove code that has
never been executed before.

Having this proof capability would greatly increase the value of Haskell
to my applications here at Motorola.

Cheers
Peter White, Motorola.





RE: {-# rules

1999-05-03 Thread Mark P Jones

I've seen a couple of messages now about Simon's proposal for
a RULES mechanism in Haskell, but it's clear that I've missed
several of the messages, including the original proposal.  I
suspect this is a result of recent changes in the way that the
list is handled, which should be resolved by now.  Unfortunately,
the missing articles are also not available from the archive at
haskell.org.

[Aside: As a general comment to all readers of the Haskell mailing
list, perhaps I can suggest:  if you've posted something to the list
within the last two weeks, and it hasn't received the kind of
response that you were expecting, then please consider reposting,
as many of us may have missed it the first time round.  Thanks!]

Returning to the main subject of this posting ... the idea of adding
rules to class declarations has been around for a long time.  In fact
the original Wadler and Blott paper that introduced type classes
(How to make ad-hoc polymorphism less ad-hoc) hints briefly at this
exact idea in its conclusions.  Later, in the closing section of my
own paper on Computing with Lattices (JFP 2, 4, 1992), I wrote about
this in a little more detail, and observed that extending the syntax
of Haskell to include rules would allow rules to be type checked and/or
fed as input to a proof checker.

Shortly after that, in a joint report with Luc Duponcheel on `Composing
Monads' (http://www.cse.ogi.edu/~mpj/pubs/composing.html), we used an
uninterpreted === operator to state the monad laws directly in Haskell
notation.  The trick here was to define:

   data Law a   -- Uninterpreted data type
   (===)   :: a -> a -> Law a
   x === y  = error "uncomputable equality"

Now we can state laws such as the following:

   mapId  :: Functor f => Law (f a -> f a)
   mapId   = fmap id === id

   mapCompose :: Functor f => (b -> c) -> (a -> b) -> Law (f a -> f c)
   mapCompose f g  = fmap f . fmap g === fmap (f . g)

Note that free variables in these laws are represented by variables on
the left hand side, with names like mapId and mapCompose serving as names
for each rule.  Because these are treated as normal Haskell definitions,
they are also subjected to the same process of type checking and type
inference.  I've written out explicit type signatures for these rules,
but they could also have been inferred from just the definition.  There
is still a distinct benefit in having a compiler type check laws like this,
even if you take them no further, for example as a hint to optimizers or
as input to proof checkers.  Note also that by embedding the laws in
Haskell, we get consistency in the type system ... Haskell has constructor
classes, and so this shows up in the laws: the above laws are intended to
hold for each instance of the "Functor" class, as reflected in the types.
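To give one more law in the same style (my addition, not from the original message), a monad law can be stated and type checked just as well:

   leftId :: Monad m => a -> (a -> m b) -> Law (m b)
   leftId x f  = (return x >>= f) === f x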

Several observations:

 - It might be nice to hide the === operator and the law datatype inside
   a compiler so that they are truly uninterpreted.  Also, one might add
   a few additional operators and connectives.

 - It might be nice to use a syntax that distinguishes laws from regular
   function definitions.  If this is done, then it will be easier for a
   Haskell compiler to identify laws as dead code, and for a
   Haskell-->ProofChecker tool to identify the laws.  The syntax that I
   played with looked like the following:

 MapCompose f g  =  fmap f . fmap g === fmap (f . g)

   This doesn't cause any serious parsing conflicts ... it doesn't even
   require any new input tokens!

 - I think it is a mistake to use a syntax that embeds laws inside some
   special comment pragma notation.  Treating laws as proper language
   objects has several advantages, not least being the ability to name
   the laws, and control uses of that name (such as import or export from
   a module) using precisely the same mechanisms that we have in the rest of
   the language.

 - Incorporating proofs into a script means defining an interpretation for
   the Law datatype and associated operators.  One could imagine having
   several implementations of these, each targeted at a different prover
   and/or logic.  Lennart Augustsson has experimented with at least one
   approach to this.

 - Since I wrote about this in my Computing with Lattices paper, I have
   realized that it is a mistake to think about laws as being *part of*
   a class declaration.  Laws should be allowed anywhere that a normal
   declaration is permitted.  There are several reasons for this:

   * Laws are useful for values that are not overloaded, and hence
 do not get defined in a class declaration.

   * Most laws describe *interactions* between several operators, and not
 just the properties of one operator.  So the MapCompose law, for
 example, is about the interaction of fmap and (.) and could equally
 well be placed with the definition of (.), or even in a separate
 module of useful laws.

   * The values in a class declaration have top-level scope, and