Re: [Haskell-cafe] Justification for Ord inheriting from Eq?

2006-04-06 Thread Brian Hulley

John Meacham wrote:

On Thu, Apr 06, 2006 at 10:52:52PM +0100, Brian Hulley wrote:

[snip]
The problem of allowing classes (in Haskell) to inherit is that you
end up with hierarchies which fix the design according to some
criteria which may later turn out to be invalid, whereas if there
were no hierarchies then you could just use the particular classes
that are needed for the particular function, eg explicitly supplying
Eq and Ord instead of just Ord etc (though for a sort function Ord
by itself would be sufficient).


well, there are a few reasons you would want to use inheritance in
haskell, some good, some bad.

1. one really does logically derive from the other, Eq and Ord are
like this, the rules of Eq say it must be an equivalence relation
and that Ord defines a total order over that equivalence relation.
this is a good thing, as it lets you write code that depends on these
properties.


As Steve and Robert pointed out, you can't always rely on these properties 
(although it is debatable whether or not floats and doubles have any useful 
numeric properties in the first place).


Also, the use of Ord for sorting comes with extra baggage in the form of a
total order whereas you might have just wanted to sort some values of a type
where there is only a partial order. Thus the bundling of < <= > >=
together, with == being defined in terms of <=, seems overly restrictive.


[rearranged]

the inflexability of the class hierarchy was my motivation for the
class aliases proposal.

http://repetae.net/john/recent/out/classalias.html


What about:

class Eq a where (==), (/=) :: ...
class PartialOrd a where
    (<), (>) :: a -> a -> Bool
    x > y = y < x

class (PartialOrd a) => TotalOrd a where x <= y = not (y < x)
    -- => not meaning inheritance but just a restriction on a for use of TotalOrd


class alias Ord a = (Eq a, PartialOrd a, TotalOrd a)
  -- components of Ord all on the same level

Then sort could be declared as sort :: PartialOrd a => [a] -> [a]
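
For example, a type with only a partial order could then still be sorted
(eg topologically) without pretending to a total order. A runnable sketch of
the idea, using a made-up Vec2 type ordered componentwise (the class follows
the proposal above):

import Prelude hiding ((<), (>))
import qualified Prelude

class PartialOrd a where
    (<), (>) :: a -> a -> Bool
    x > y = y < x

data Vec2 = Vec2 Int Int

instance PartialOrd Vec2 where
    Vec2 x1 y1 < Vec2 x2 y2 = x1 Prelude.< x2 && y1 Prelude.< y2

-- Vec2 1 2 and Vec2 2 1 are incomparable: neither (<) nor (>) holds,
-- so Vec2 could never satisfy a TotalOrd law, yet a function of type
-- PartialOrd a => [a] -> [a] could still order it topologically.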

Changing the subject slightly, a minor problem (no doubt you've already 
noticed it) is that if you allow instance declarations for class aliases, 
there is a danger of overlapping instance definitions eg:


class Monad m where
    (>>=) :: ...

class alias AliasMonadFirst m = (Monad m, First m)
class alias AliasMonadSecond m = (Monad m, Second m)

instance AliasMonadFirst T where
    x >>= y = DEF1

instance AliasMonadSecond T where
    x >>= y = DEF2

foo :: (AliasMonadFirst a, AliasMonadSecond a) => ...
    -- problem: conflicting Monad dictionaries for a == T

The presence of such overlapping instances might be invisible to the
end-user of the aliases (since it depends on how the aliases are bound,
which is presumably usually hidden to allow later refactoring).


This problem doesn't arise at the moment since the instance declaration only
allows the non-inherited (ie non-shared) part to be specified (so that foo
:: (MonadIO a, MonadPlus a) => ... always uses the same definitions for Monad
even though the identical definitions for Monad are duplicated in the 2
dictionaries).




2. it is more efficient on dictionary passing implementations of
typeclasses. (does not apply to typecase based implementations like
jhc)

3. it is simpler to declare instances for, the default methods of Ord
can depend on Eq.


2) can be solved using aliases or whole program optimization
3) can be solved with aliases


1 is a very good reason.


Only if you are happy with <= having to be a total order in every program
that will ever be written :-)
(Though perhaps I am contradicted by the necessary relationship between 
TotalOrd and PartialOrd)


Regards, Brian. 




Re: [Haskell-cafe] Justification for Ord inheriting from Eq?

2006-04-06 Thread Brian Hulley

Brian Hulley wrote:

John Meacham wrote:

[snip]
1. one really does logically derive from the other, Eq and Ord are
like this, the rules of Eq say it must be an equivalence relation
and that Ord defines a total order over that equivalence relation.
this is a good thing, as it lets you write code that depends on these
properties.


As Steve and Robert pointed out, you can't always rely on these
properties (although it is debatable whether or not floats and
doubles have any useful numeric properties in the first place).


Actually I'm revising my idea about this. I think that Float and Double are 
just intrinsically dangerous numeric types which have members that don't 
satisfy the Eq and Ord equations so their presence doesn't contradict your 
argument in favour of hierarchical classes. It rather suggests that the 
existing hierarchy where Float and Double are instances of RealFloat which 
inherits (indirectly) from Ord is simply wrong, since Float and Double don't 
and can't obey the Ord equations.


Perhaps Float and Double should be moved to a class such as DangerousNum 
which does not inherit from Eq, Ord etc so that it would be clear they can't 
participate in any equational reasoning?


This would require different names for all DangerousNum ops or some way to 
qualify the name of a class member with the class name eg 3.0 DangerousNum£+ 
5.6 (dot can't be used because then DangerousNum would be assumed to be a 
module name)
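
For instance, with renamed ops (a sketch only; the method names here are
invented, since the whole point is that they could not reuse the Eq/Ord/Num
names):

class DangerousNum a where
    dplus, dtimes :: a -> a -> a

instance DangerousNum Double where
    dplus = (+)
    dtimes = (*)

-- 3.0 `dplus` 5.6 still works, but under this scheme Double would have
-- no Eq/Ord instances for equational reasoning to (wrongly) rely on.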


Stephen Forrest wrote:

On 4/6/06, Brian Hulley [EMAIL PROTECTED] wrote:

What about:

class Eq a where (==), (/=) :: ...
class PartialOrd a where
    (<), (>) :: a -> a -> Bool
    x > y = y < x

class (PartialOrd a) => TotalOrd a where x <= y = not (y < x)
    -- => not meaning inheritance but just a restriction on a for use of TotalOrd


A partial order can be defined in either of two ways, both of which
require some notion of equality.  If it is a weak partial order, you
need to require reflexivity, i.e. x=y implies R(x,y).  If it is a
strong partial order, you need to require irreflexivity.  So some
notion of equality is necessary in either case.  (I think the same is
true of preorders, if we want to generalize to that.)

So, if such a PartialOrd existed, it really should be between Eq and
Ord in the class hierarchy.


It seems that there are indeed some hierarchies that are intrinsic and 
therefore I'll have to withdraw my attempt to argue in favour of a flat 
class system! :-)
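
For reference, the hierarchy Stephen's argument points at might look like
this (a sketch, with the laws only as comments):

import Prelude hiding ((<), (<=))

class Eq a => PartialOrd a where
    (<) :: a -> a -> Bool
    -- law (strong/irreflexive): x == y implies not (x < y)

class PartialOrd a => TotalOrd a where
    (<=) :: a -> a -> Bool
    x <= y = not (y < x)
    -- law (totality): x <= y || y <= x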


Thanks to all who replied,

Brian. 




Re: [Haskell-cafe] show for functional types

2006-04-05 Thread Brian Hulley

Robert Dockins wrote:

On Apr 1, 2006, at 3:23 PM, Brian Hulley wrote:

[snip]
 For particular types T1 and T2, if (f (x::T1))::T2 === g x for
all x in T1 then f :: T1->T2 and g :: T1->T2 can be freely
substituted since the context T1->T2 cannot tell them apart.


Having thought about this a bit more, I realize that this statement
is also too strong.  In the lambda calculus, extensionality is
equivalent to the validity of eta-conversion (Plotkin, Call-by-value,
Call-by-name and the lambda calculus, 1975).  However, in Haskell,
eta-conversion is not valid (ie, meaning-preserving).  Observe:

f, g :: a -> b
f = undefined
g = \x -> undefined x

forall x::a, f x === g x === _|_.  However, 'seq' can tell them apart.

seq f 'a' === _|_
seq g 'a' === 'a'

So f and g are not replaceable in all term contexts (in particular,
not in 'seq' contexts).
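
Robert's observation can also be checked at the top level; a minimal
runnable sketch, using Control.Exception to make the _|_ observable:

import Control.Exception (SomeException, evaluate, try)

f, g :: a -> b
f = undefined
g = \x -> undefined x

main :: IO ()
main = do
    r <- try (evaluate (seq g 'a')) :: IO (Either SomeException Char)
    print r   -- Right 'a': g is a lambda, already in WHNF
    s <- try (evaluate (seq f 'a')) :: IO (Either SomeException Char)
    print s   -- Left Prelude.undefined: forcing f diverges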


I should not have used functions, since in any case for full generality
referential transparency is about allowing equivalent expressions to be
substituted eg as in:


For a particular type T, if f::T === g then f::T and g::T can be freely 
substituted since the context T cannot tell them apart


This of course begs the question of how === is defined and so perhaps is not 
that useful.


If === must be defined extensionally then not all contexts in Haskell are
referentially transparent, since seq is referentially opaque; but this would
render the whole notion of referential transparency useless for Haskell,
since there would be no way for a user of a library function to know whether
or not the argument context(s) are transparent or opaque. At the moment I
can't think of a non-extensional definition for === but there must be one
around somewhere so that equational reasoning can be used.


Regards, Brian. 




Re: [Haskell-cafe] show for functional types

2006-04-01 Thread Brian Hulley

Claus Reinke wrote:

the usual way to achieve this uses the overloading of Nums in Haskell:
when you write '1' or '1+2', the meaning of those expressions depends
on their types. in particular, the example above uses 'T Double', not
just 'Double'.


However there is nothing in the functions themselves that restricts their 
use to just T Double. Thus the functions can be compared for equality by 
supplying an argument of type T Double but used elsewhere in the program 
with args of type (plain) Double eg:


-- Change to module AbsNum
instance (Simplify a) => Eq (T a) where
    (==) (Const x) (Const y) = x == y
    (==) (Var x) (Var y) = x == y
    (==) (Add xs) (Add ys) = and (map (\(x, y) -> x == y) (zip xs ys))
    (==) _ _ = False -- Not needed for the example

module Main where
import AbsNum

f x = x + 2.0
g x = x + 1.0 + 1.0

funeq :: (T Double -> T Double) -> (T Double -> T Double) -> Bool
funeq p q = let y = Var "y" in p y == q y

main = do
    print (funeq f g)
    print (f 10)
    print (g 10)
    putStrLn ""
    print (funeq f f)
    print (f 10)
    print (g 10)


main

False
12.0
12.0

True
12.0
12.0

Thus we can determine that f is implemented by different code from g (the
example would be even more convincing if Ints were used instead of
Doubles) and so f and g are not interchangeable.


... nothing prevents us from defining that instance in such a way that we
construct a representation instead of doing any additions.
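
For concreteness, the kind of representation-building instance Claus
describes might look like this (a sketch; the constructors follow the AbsNum
example above, but the details are assumptions):

data T a = Const a | Var String | Add [T a]
    deriving Show

instance Num a => Num (T a) where
    x + y         = Add [x, y]            -- construct a representation...
    fromInteger n = Const (fromInteger n) -- ...instead of computing
    (*)    = error "not needed for this example"
    abs    = error "not needed"
    signum = error "not needed"
    negate = error "not needed"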


Thus referential transparency of polymorphic functions is foiled.

Regards, Brian. 




Re: [Haskell-cafe] show for functional types

2006-04-01 Thread Brian Hulley

Brian Hulley wrote:

   (==) (Add xs) (Add ys) = and (map (\(x, y) -> x == y) (zip xs ys))


What on earth was I thinking!!! ;-) Should be:

(==) (Add xs) (Add ys) = xs == ys

(Doesn't affect the validity of my argument though...)




Re: [Haskell-cafe] show for functional types

2006-04-01 Thread Brian Hulley

Robert Dockins wrote:

On Saturday 01 April 2006 11:53 am, Brian Hulley wrote:

Claus Reinke wrote:

the usual way to achieve this uses the overloading of Nums in
Haskell: when you write '1' or '1+2', the meaning of those
expressions depends on their types. in particular, the example
above uses 'T Double', not just 'Double'.


However there is nothing in the functions themselves that restricts
their use to just T Double. Thus the functions can be compared for
equality by supplying an argument of type T Double but used
elsewhere in the program with args of type (plain) Double eg:


Overloaded functions instantiated at different types are not (in
general) the same function.  If you mentally do the
dictionary-translation, you'll see why.

In particular for f, g :: XYZ a => a -> b, and types n m such that
(XYZ n) and (XYZ m),

f :: (n -> b) === g :: (n -> b)

does *not* imply

f :: (m -> b) === g :: (m -> b)


That is where your argument falls down.


No, it doesn't, because that wasn't my argument. Consider:

f :: C a => a -> a
g :: C a => a -> a

Now if we can define just one instance of C, eg T1, where f (x::T1) /= g
(x::T1), then we can tell f and g apart for all instances of C, even when
there is another instance of C, eg T2, for which f (x::T2) == g (x::T2).
Thus we can't just interchange the uses of f and g in the program because we
can always use values of T1 to distinguish between uses of f :: T2 -> T2 and
g :: T2 -> T2.


If f (x::T1) == g (x::T1) then nothing has been demonstrated, as you rightly
point out, because there could be another instance of C, eg T3, for which f
(x::T3) /= g (x::T3). It is the inequality that is the key to breaking
referential transparency, because the discovery of an inequality implies
different code.


Regards, Brian. 




Re: [Haskell-cafe] show for functional types

2006-04-01 Thread Brian Hulley

Robert Dockins wrote:

[snip]
From an earlier post:


Now since f and g compute the same results for the same inputs,
anywhere in a program that you can use f you could just replace f
by g and the observable behaviour of the program would be
completely unaffected. This is what referential transparency means.


My essential claim is that the above statement is in error (but in a
fairly subtle way).


Ok I see now! :-) I was confusing the concept of referential transparency 
with a kind of global code equivalence, so the rest of my argument is 
irrelevant. Thus I should have said:


 For particular types T1 and T2, if (f (x::T1))::T2 === g x for all x in T1
then f :: T1->T2 and g :: T1->T2 can be freely substituted since the context
T1->T2 cannot tell them apart.


Thanks for pointing out the faulty definition,

Regards, Brian. 




Re: [Haskell-cafe] show for functional types

2006-04-01 Thread Brian Hulley

Claus Reinke wrote:

[snip]
... (try this for
one study of the many definitions [scanned paper - 3MB]:
http://www.dina.kvl.dk/~sestoft/papers/SondergaardSestoft1990.pdf ).


Thanks for the link,

Regards, Brian.


Re: [Haskell-cafe] show for functional types

2006-03-31 Thread Brian Hulley

Greg Buchholz wrote:

Neil Mitchell wrote:

Now lets define "super show" which takes a function, and prints its
code behind it, so:

superShow f = "not"
superShow g = "\x -> case ..."

now superShow f /= superShow g, so they are no longer referentially
transparent.


   OK.  I'm probably being really dense today, but where did g come
from?  Is this the internal definition of not?  And does this
loss of referential transparency contaminate the rest of the
language, or is this superShow just an anomaly?


Here is another example. Consider two functions f and g which, given the 
same inputs, always return the same outputs as each other such as:


 f x = x + 2
 g x = x + 1 + 1

Now since f and g compute the same results for the same inputs, anywhere in 
a program that you can use f you could just replace f by g and the 
observable behaviour of the program would be completely unaffected. This is 
what referential transparency means.


However, if you allowed a function such as superShow, superShow f == "x + 2"
and superShow g == "x + 1 + 1", so superShow f /= superShow g, and thus you
could no longer just use f and g interchangeably, since these expressions
have different results.


Thus the existence of superShow would contaminate everything, because if you 
were using a library function for example you would have no way of knowing 
whether the writer of the function had used superShow somewhere deep in its 
implementation ie referential transparency is an all or nothing concept.


Regards, Brian. 




[Haskell-cafe] Wrapping the IO monad to get safe, self-describing imperative APIs

2006-03-30 Thread Brian Hulley

Hi -
In a discussion started on the GHC mailing list 
http://www.haskell.org//pipermail/glasgow-haskell-users/2006-March/009923.html I 
discovered an idea for typing imperative API functions that may be of 
interest to other people, and which makes use of Haskell's type system to 
achieve a level of self-description and static bad-usage detection 
impossible in C/C++ APIs.


A nicely formatted (by trac) description of the advantages and how to write 
such APIs is available at http://hackage.haskell.org/trac/ghc/ticket/736 (a 
feature request I submitted to the GHC team to make it easier to write such 
APIs) and the plain text is included at the end of this post.


The basic idea is that different monads, which are just newtypes of the IO 
monad, can be used to prevent API functions being called in the wrong 
context. For example, consider the following C function which implements a 
render callback using a simplified version of DirectX to draw a square on 
the screen:


 void Render(int width, int height){
 Clear();
 BeginScene();
 DrawSquare();
 EndScene();
 }

In the C code, the fact that DrawSquare() can only be called between 
Begin/EndScene, and the fact that Clear(), BeginScene(), EndScene() can only 
be called in a render callback (as opposed to a keypress callback for 
example) are completely implicit, and must be borne in mind by the user who 
has to wade through heaps of documentation to guess at this understanding.


By using different monads in Haskell, the above function could be written as 
follows:


 newtype RenderM a = RenderM (IO a) deriving (Functor, Monad, MonadIO)
 newtype DrawM a = DrawM (IO a) deriving (Functor, Monad, MonadIO)

 type RenderCallback = Int -> Int -> RenderM ()

 onRender :: RenderCallback -> IO ()

 clear :: RenderM ()
 scene :: DrawM () -> RenderM ()
 drawSquare :: DrawM ()

 render :: RenderCallback
 render w h = do
     clear
     scene $ do
         drawSquare

making it impossible to call drawSquare in any situation that the API did
not intend, and also making it easy to see that in order to draw something,
the drawing will need to be an argument of some function which makes use of
DrawM () eg scene, which in turn needs to be an arg of some function which
takes a RenderM (), thus leading from inside out: drawSquare -> scene ->
RenderCallback -> onRender.


A consequence of all this is that it would be necessary (to completely
enforce correct API usage) to have an extra optional entry point for Haskell
programs, to prevent APIs being re-started by lifting their init functions
into a callback monad (more details at the end of this post and the trac
report).


Regards, Brian.

#736: Allowing any newtype of the IO monad to be used in FFI and extra
optional entry point
+---
     Reporter:  [EMAIL PROTECTED]  |        Owner:
         Type:  feature request    |       Status:  new
     Priority:  normal             |    Milestone:
    Component:  Compiler           |      Version:  6.4.1
     Severity:  normal             |     Keywords:  FFI foreign monad entry point
           Os:  Multiple           |   Difficulty:  Unknown
 Architecture:  Multiple           |
+---
Hi -
When designing an API it is desirable to be able to encode the correct
usage patterns for functions in the API in the type of the functions
themselves, rather than relying on the user understanding the
documentation and having to use runtime checks to ensure correct usage.
Consider the following callback which uses a typical C API (DirectX) to
draw something to the screen:
{{{
 void Render(int width, int height){
 Clear();
 BeginScene();
 DrawSquare();
 EndScene();
 }
}}}

In Haskell with the FFI at present, we can define an equivalent API and
use it as follows:
{{{
 type RenderCallback = Int -> Int -> IO ()

 clear :: IO ()
 scene :: IO () -> IO ()
 drawSquare :: IO ()

 onRender :: RenderCallback -> IO ()
 runGraphicsWindow :: IO () -> IO ()

 render :: RenderCallback
 render w h = do
     clear
     scene $ do
         drawSquare

 main = runGraphicsWindow $ do
     onRender render
}}}

This is all very well, but just like the C equivalent, it doesn't encode
the fact that drawSquare can only be called between BeginScene and
EndScene. For example the following render callback would result in a
runtime error or at least an unexpected result for the user:
{{{
 badRender w h = drawSquare
}}}

To allow the type checker to enforce correct usage, we can use different
monads which just wrap the IO monad as follows:
{{{
 newtype RenderM a = RenderM (IO a) deriving (Functor, 

Re: Pragmatic concurrency Re: [Haskell-cafe] multiple computations, same input

2006-03-29 Thread Brian Hulley

Robin Green wrote:

On Wed, 29 Mar 2006 12:50:02 +0100
Jon Fairbairn [EMAIL PROTECTED] wrote:

[snip]
1) choosing the optimal reduction strategy is undecidable

2) we shouldn't (in general) attempt to do undecidable
   things automatically
[snip]

[snip]
I suggest that a Haskell program should be treated as an executable
specification. In some cases the compiler can't optimise the program
well enough, so we (by which I mean, ordinary programmers, not
compiler geeks) should be able to explicitly provide our own
optimisations, as rewrite rules (generalised ones, or specialised
ones for individual functions). Democratise the means of automated
optimisation!


This sounds good. The only thing I'm wondering is what do we actually gain
by using Haskell in the first place instead of just a strict language? It
seems that Haskell's laziness gives a succinct but too inefficient program
which then needs extra code in the form of rewrite rules/pragmas, or else a
complete rewrite in terms of seq etc to get it to run fast enough without
space leaks...


Regards, Brian. 




Re: Pragmatic concurrency Re: [Haskell-cafe] multiple computations, same input

2006-03-29 Thread Brian Hulley

Brian Hulley wrote:

Robin Green wrote:

On Wed, 29 Mar 2006 12:50:02 +0100
Jon Fairbairn [EMAIL PROTECTED] wrote:

[snip]
1) choosing the optimal reduction strategy is undecidable

2) we shouldn't (in general) attempt to do undecidable
   things automatically
[snip]

[snip]
I suggest that a Haskell program should be treated as an executable
specification. In some cases the compiler can't optimise the program
well enough, so we (by which I mean, ordinary programmers, not
compiler geeks) should be able to explicitly provide our own
optimisations, as rewrite rules (generalised ones, or specialised
ones for individual functions). Democratise the means of automated
optimisation!


This sounds good. The only thing I'm wondering is what do we actually
gain by using Haskell in the first place instead of just a strict
language? It seems that Haskell's laziness gives a succinct but too
inefficient program which then needs extra code in the form of
rewrite rules/pragmas, or else a complete rewrite in terms of seq etc
to get it to run fast enough without space leaks...


Thinking about this some more, I realised Jon had already answered this 
question in his 3rd point:


On Wed, 29 Mar 2006 12:50:02 +0100
Jon Fairbairn [EMAIL PROTECTED] wrote:
 3) Separation of concerns: Pragmatic decisions about
evaluation order should be kept separate from the
denotational aspect of the code. By this token, seq

I wonder if there could be a really large repository of rewrite rules on the 
web somewhere, with heuristics to determine various strategies for applying 
them.
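
GHC's RULES pragmas are already a small-scale version of this; a repository
entry might look like the classic map-fusion rule:

module MapFusion where

{-# RULES
"map/map"  forall f g xs.  map f (map g xs) = map (f . g) xs
  #-}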


There would also need to be some automated way of proving correctness of 
rewrite rules, so that if someone submitted a new one it would be sure not 
to introduce bugs into the optimization.


In this way, the Haskell community could gradually chip away at the 
undecidableness of automatically optimizing Haskell programs, because it may 
turn out to be the case that most functions are members of a very small 
subset of the possible Haskell functions and could thus be handled by a 
finite set of rewrite rules.


Regards, Brian.




Re: [Haskell-cafe] Re: how would this be done? type classes?existentialtypes?

2006-03-23 Thread Brian Hulley

Ben Rudiak-Gould wrote:

Brian Hulley wrote:

Is there a reason for using <> instead of

  [exists a. Resource a => a]

?


Only that => looks like a function arrow, <> looks like a tuple. I
stole this notation from an unpublished paper by SimonPJ et al on
adding existential quantification to Haskell. I'm not especially
attached to it. Actually I rather like

forall a | Resource a. a
exists a | Resource a. a




The bar is certainly consistent with the use in guards etc, but would lead 
to:


 exists a | (Show a, Eq a) . a

which looks a bit clunky because of the need for the () because of the
comma (to limit the scope of the comma). Also, it might be confusing to have
to use a different notation to qualify type variables just because these
type variables are being existentially quantified, when => is used everywhere
else.

Personally I'd get rid of => altogether, and enclose constraints in {} eg

 foo :: forall a {Resource a} a  -- dot is optional after }
 bar :: {Show a, Eq a} a -> Bool
 [exists a {Resource a} a]
 class {Monad m} MonadIO m where ...

This would fit into the rest of the syntax for Haskell as it stands at the 
moment.


[snip]


It is tricky, though. E.g. given (f (g z)) where

   f :: forall a. [a] -> Int
   g :: String -> (exists b. [b])

in principle you should be able to call g first, getting a type b and
a list [b], then instantiate f with the type b, then pass the list to
it, ultimately getting an Int. But I don't know how to design a type
inference algorithm that will find this typing. If on the other hand

   f :: (exists a. [a]) -> Int

then it's easy to do the right thing---which is a little odd since
these two types for f are otherwise indistinguishable.


If the two types for f are indistinguishable, perhaps the forall in f's type 
could be converted into an existential as a kind of normal form before doing 
type inference?





Hope I'm not making this more confusing but I'm still trying to get
my head around all these invisible happenings regarding dictionaries
so I can visualise what's happening in terms of bytes and pointers
in the runtime


Once you understand where the types go in System F, the dictionaries
are easy: they always follow the types around. Any time you have

forall a b c. (C1 a b, C2 b c) => ...

in the source, you have five corresponding parameters/arguments in
GHC Core, one for each type variable and constraint. These are always
passed around as a unit (at least prior to optimization). In
principle you could box them in a 5-tuple. The dictionary values are
nothing more or less than proofs that the corresponding constraints
hold.
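
A minimal sketch of that translation (names invented here): the class
becomes a record of methods, and each constraint becomes an extra argument:

data EqDict a = EqDict { eq :: a -> a -> Bool }

-- elem :: Eq a => a -> [a] -> Bool, after translation:
elemBy :: EqDict a -> a -> [a] -> Bool
elemBy d x = any (eq d x)

-- the 'instance Eq Int' declaration becomes a dictionary value,
-- ie a proof that the constraint holds at Int:
eqIntDict :: EqDict Int
eqIntDict = EqDict (==)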


Thanks, this helps a lot,

Brian. 




Re: [Haskell-cafe] Re: Question regarding let clauses

2006-03-09 Thread Brian Hulley

Christian Maeder wrote:

Martin Percossi wrote:

matMul a b = do { let foo = 2*5; return a }


probably
 { let {foo = 2*5}; return a }
will work (untested)

your ; indicates a further let-equation, but the possibility to use
; without { and } is a bit pathologic (and haddock used to
reject it)


I think this kind of example just proves my numerous arguments that the 
layout rule should be changed to only allow ';' after an *explicit* opening 
brace...


Regards, Brian. 




[Haskell-cafe] Re: MUA written in Haskell (was: Getting GHC to print Done when it's finished linking?)

2006-03-08 Thread Brian Hulley

Nils Anders Danielsson wrote:

On Tue, 07 Mar 2006, Brian Hulley [EMAIL PROTECTED] wrote:

(Moved from ghc-users.)


Brian Hulley wrote:



(time for a proper email client to be written in Haskell! ;-) )


I had the same thought yesterday, after an Emacs-Lisp session in which
I was trying to get Gnus to do exactly what I wanted it to...

Out of curiosity, how much work would it take to write an easily
configurable, decent MUA in Haskell? I don't know too much about MUAs,
but I have a feeling that we already have quite a few libraries that
are needed for the job: UIs (including HTML rendering...), plugins,
various protocols, encryption...


I'm afraid I don't know much about MUAs either.
I see there's some stuff in Network.* that may be useful...

Unfortunately I don't have time at the moment to try implementing one, but 
for what it's worth here are some thoughts I had on what an ideal email 
client, suitable for Haskell programmers, would be like:


1) Plain text based to avoid problems with viruses etc getting in via HTML. 
HTML emails received could just be displayed as plain text (including all 
markup)


2) What-you-see-is-exactly-what-will-be-sent for editing, so that when you 
press send you don't need to worry about the text being all mangled up by 
wrapping/replacement of characters etc


3) When you click reply or reply all, the original text should be
indented with '>' (at the moment OE requires QuoteFix to achieve this
trivial but essential functionality)


4) An API could be exposed so that the user could write scripts to put
things into correct folders etc.
The API could provide info about what is currently waiting on the server,
and the ability to download, or to delete without downloading (eg for big
attachments that are suspected of being viruses)


5) Ideally the scripting language would be Haskell. There is already stuff 
in Language.Haskell.* for doing parsing but I can't find anything which 
would allow you to compile and load functions into a running program.


So I'd imagine that the email client would contain a plain text editor that
wrapped text as it is edited (if wrapping is needed nowadays anyway?) with
email addresses and URLs in the text clickable; a tree of folders and a
folder contents window which could display the emails by date/subject/name
or thread; and an API and way of loading scripts written in Haskell to do
all the complicated automatic stuff - which would now be completely under
the user's control.


Regards, Brian. 




Re: [Haskell-cafe] |> vs. $ (was: request for code review)

2006-03-07 Thread Brian Hulley

Brian Hulley wrote:

translate :: (Monad m) => String -> m String
translate = do
  createParseContext
  readToFirstIdentifier
  dealWithDeclarator
  consolidateOutput


The type signature above doesn't match the do block. It would either have to 
be changed to something like:


translate :: Control.Monad.State.MonadState String m => m ()

(storing the string in the monad's state instead of using a monad which
returns it) or the do block could be replaced with the >>= operator as
below, to thread the returned string between the components of the pipe:


translate :: Monad m => String -> m String
translate x =
    return x >>=
    createParseContext >>=
    readToFirstIdentifier >>=
    dealWithDeclarator >>=
    consolidateOutput
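
In more recent versions of base the same pipeline can also be written with
Kleisli composition, which reads left to right; a sketch assuming the same
component signatures as above:

import Control.Monad ((>=>))

translate :: Monad m => String -> m String
translate = createParseContext >=> readToFirstIdentifier
        >=> dealWithDeclarator >=> consolidateOutput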



Re: [Haskell-cafe] |> vs. $ (was: request for code review)

2006-03-07 Thread Brian Hulley

Shannon -jj Behrens wrote:

I did think of using a monad, but being relatively new to Haskell, I
was confused about a few things.  Let's start by looking at one of my
simpler functions:

-- Keep pushing tokens until we hit an identifier.
pushUntilIdentifier :: ParseContextTransformation
pushUntilIdentifier ctx
  | currTokType ctx == Identifier = ctx
  | otherwise =
  let newStack = (currTok ctx) : (stack ctx) in
    (ctx {stack=newStack}) |>
    getToken |>
    pushUntilIdentifier

The function itself is a ParseContextTransformation.  It takes a
context, transforms it, and returns it.  Most of the pipelines in the
whole application are ParseContextTransformations, and the |> (or $ or
.) are ways of tying them together.  My questions concerning Monads
in this example are:

1. Monads apply a strategy to computation.  For instance, the list
monad applies the strategy, "Try it with each of my members."  What
part of my code is the strategy?


In the pipe in the 'otherwise' branch, at the moment you have to assume that
each of the transformations can successfully be done. What happens if
getToken can't get a token because there are no more tokens left?
To solve this problem you could use a monad such as Maybe, to encapsulate
the strategy "keep going as long as no problems have been encountered so
far" eg:


type ParseContextTransformation = ParseContext -> Maybe ParseContext

pushUntilIdentifier :: ParseContextTransformation
pushUntilIdentifier ctx
  | currTokType ctx == Identifier = Just ctx
  | otherwise =
      let newStack = (currTok ctx) : (stack ctx) in
        return ctx{stack=newStack} >>=
        getToken >>=
        pushUntilIdentifier

-- Read the next token into currTok.
getToken :: ParseContextTransformation
getToken ctx@(ParseContext {input=s}) =
  let lstrip s = dropWhile isSpace s
  in case lexString (lstrip s) of
      (Just token, theRest) -> Just (ctx{currTok=token, input=theRest})
      _ -> Nothing

lexString :: String -> (Maybe Token, String)
lexString s@(c:cs) | isAlphaNum c =
  let (tokString, theRest) = span isAlphaNum s
      token = classifyString tokString in
  (Just token, theRest)
lexString ('*':cs) = (Just $ classifyString "*", cs)
lexString (c:cs) = (Just $ classifyString (c:[]), cs)
lexString [] = (Nothing, [])  -- can now deal with this case

lexString is itself a candidate for a monadic computation on a state monad 
where the state is the string and Maybe Token is the return type, but it 
depends on how much you want to monadify your code...
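
For example, such a state-monad version might look like this (a sketch;
Token and classifyString are assumed from the original code, and
Control.Monad.State comes from the mtl package):

import Control.Monad.State
import Data.Char (isAlphaNum, isSpace)

lexStringM :: State String (Maybe Token)
lexStringM = do
    modify (dropWhile isSpace)          -- lstrip
    s <- get
    case s of
        [] -> return Nothing
        (c:cs)
            | isAlphaNum c -> do
                let (tokString, theRest) = span isAlphaNum s
                put theRest
                return (Just (classifyString tokString))
            | otherwise -> do
                put cs
                return (Just (classifyString [c]))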




2. Monads are containers that wrap a value.  For instance, the Maybe
monad can wrap any value, or it can wrap no value and just be Nothing.
 What part of my code is the thing being wrapped, and what part is
extra data stored in the Monad itself?

So I guess:

3. Is the ParseContext the monad or the thing being wrapped?


Using the Maybe monad as above, it is the monad's return type. For any
monad m, "m a" means the monad m returning a value of type a, so "Maybe
ParseContext" means a Maybe monad returning a value of type ParseContext. I
think "stored in the monad itself" would usually refer to the case where you
use some sort of state monad where the ParseContext would be the state, but
AFAIK this wouldn't be the most natural way to structure this sort of
application.




4. How do I divide the code between the functions on the right side of
>>= and the functions in the monad itself?  The functions on the right
side of >>= operate on the stuff inside the monad, and the functions
in the monad itself operate on the stuff in the monad.


Using the Maybe monad you could access the result by:

toplevel :: String -> IO ()
toplevel s = case translate s of
    Just s' -> putStrLn s'
    Nothing -> putStrLn "Error translating"

where translate and each of its component functions are changed to return 
their results via the Maybe monad.




5. How does the ParseContextTransformation relate?


I just modified ParseContextTransformation so that the resulting 
ParseContext is returned via the Maybe monad to allow for failure in any of 
the transformation steps. You'd need to also change createParseContext to 
return Maybe ParseContext etc.


There are more advanced ways of using monads, eg where you use Monad m =>
instead of hardcoding the Maybe monad into the result, but it probably makes
more sense to understand monads using concrete examples first. The tutorials
give more info on these advanced monadic ways (and are certainly far better
than me at explaining them).


Hope this helps,
Brian.




Re: [Haskell-cafe] Layout rule (was Re: PrefixMap: code reviewrequest)

2006-03-06 Thread Brian Hulley

Malcolm Wallace wrote:

Brian Hulley wrote:

However I think there is an error in the description of this in
section 2.7  of the Haskell98 report, which states:

If the indentation of the non-brace lexeme immediately following a
where,  let, do or of is less than or equal to the current
indentation level, then  instead of starting a layout, an empty list
{} is inserted, and layout  processing occurs for the current
level ...

I dispute the or equal in the above statement, since it seems to be
clearly in contradiction to what is actually being done.


Section 2.7 does say that it is an informal description, so although
it is correct, it is not complete.  In the case of the module header,
the question is really what is the current indentation level? (that
we must be strictly greater than).  The answer can be found in the
formal definition of the layout rule in section 9.3.  At the
beginning of the module, there is _no_ current indentation level -
thus the fourth equation of L applies.


Thanks. However I do think the fact that there is a special case for the 
module head would merit a mention in section 2.7, because at the moment it's 
a bit like looking at a stack of chocolate cookies and defining the top one 
to be vanilla - it works but who'd ever have thought of it for themselves 
just looking at the visual indentation on the screen?


On the subject of 9.3, I'm puzzled by:
For the purposes of the layout rule, Unicode characters in a source program 
are considered to be of the same, fixed, width as an ASCII character. 
However, to avoid visual confusion, programmers should avoid writing 
programs in which the meaning of implicit layout depends on the width of 
non-space characters.


Surely almost all Haskell programs rely on the width of every non-space
character to be the same as the width of a space (ie monospaced font where
one character == one glyph) as in

   let a = 3
       b = 5

I'd suggest that the word "non-space" should be replaced by "multi-glyph",
and perhaps there could be a recommendation to avoid the use of multi-glyph
characters in the first place (otherwise an editor would have to be smart
enough to maintain the correct multi-glyph spaces in the columns under
them...)


Regards, Brian. 




Re: [Haskell-cafe] Comparing programs

2006-03-06 Thread Brian Hulley

Harry Chesley wrote:

But here's the thing that makes it hard (at least for me): two
programs are considered the same if they can be made to match by
rearranging the order of the input parameters. I.e., f(a), g(b) is
the same as f(b), g(a). Although parameters can be reordered, they
cannot be substituted (f(a), g(b) is _not_ the same as f(a), g(a)).


Are you sure you've got this right? I'd have thought that rearranging the 
order of input parameters just means that if you have h(a,b) == h'(b,a) for 
some h and h' then you are allowed to substitute h' for h as long as you 
also swap the params around.


f(b),g(a) can only be obtained from f(a),g(b) by substituting the params for 
f and g respectively, which is not the same as rearranging the order of f 
and g's params, and which as you've said, is not allowed.
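
In Haskell terms the allowed transformation is just composition with flip;
a small illustration (h and h' are invented):

h :: Int -> Int -> Int
h a b = a - 2 * b

h' :: Int -> Int -> Int
h' = flip h    -- h' b a == h a b, so h a b may be rewritten as h' b a

-- whereas substituting one parameter for another, as in h a a,
-- is a genuinely different program.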


Also, what is the meaning of f(a), g(b)? If f and g are just side 
effect-free functions, is this not equivalent to just g(b)?


I'd suggest the first step is to define a formal semantics for the meaning 
of the programs to help illuminate what is/isn't equivalent, then think 
about rewrite rules/ search strategies/heuristics etc, and only at the very 
end try to work out how to code it in Haskell. (Of course the problem of 
comparing two functions in a TM-complete language is undecidable so unless 
your functional language is very restricted no such algorithm will work for 
every input)


Regards, Brian.





Re: [Haskell-cafe] |> vs. $ (was: request for code review)

2006-03-06 Thread Brian Hulley

Shannon -jj Behrens wrote:

I find ctx |> currTok |> tokenType to be more readable than
tokenType $ currTok $ ctx because you're not reading the code in
reverse.  That's my primary complaint with . and $.  That's
especially the case when I'm spreading the code over multiple lines:

-- Translate a C type declaration into English.
translate :: String -> String
translate s =
  s |>
  createParseContext |>
  readToFirstIdentifier |>
  dealWithDeclarator |>
  consolidateOutput



If you were wanting to be able to deal with exceptions/ errors etc during 
the translation process, you'd usually use a monad (Haskell's version of a 
pipe), in which case the operations could still be read left to right eg:


translate :: (Monad m) => String -> m String
translate = do
  createParseContext
  readToFirstIdentifier
  dealWithDeclarator
  consolidateOutput

So while . and $ are useful for combining functions together in parts of a
program, and are consistent with the idea that the function always comes
first followed by its argument, the top level (at least) would usually be
monadic and hence not require |> to get left-to-right readability. I've
copied the links on monads below from a previous post by Jared:


  http://www.nomaware.com/monads/html/
  http://haskell.org/hawiki/Monad


Regards, Brian. 




Re: [Haskell-cafe] request for code review

2006-03-05 Thread Brian Hulley

Neil Mitchell wrote:

stackTop ctx =
  let (x:xs) = stack ctx in x

stackTop ctx = head ctx


stackTop ParseContext{stack=x:_} = x

or:

stackTop ctx = head (stack ctx)

=== stackTop ctx = head . stack $ ctx
=== stackTop = head . stack

Regards, Brian.



Re: [Haskell-cafe] Layout rule (was Re: PrefixMap: code reviewrequest)

2006-03-04 Thread Brian Hulley

Daniel Fischer wrote:

Am Freitag, 3. März 2006 19:21 schrieb Brian Hulley:

Brian Hulley wrote:

Brian Hulley wrote:

[snip]

AFAICT, the description in the report is correct, *except for the
'where' in module LayOut where*.

[snip]

So my guess is that layout-processing is applied only to the
module-body, not to the module head and probably that should be
mentioned in the report.


Thanks - that's quite a relief, because my incremental parser absolutely
relies on the indentation of a child block being more than that of its
parent in the AST...


Perhaps a future incarnation of Haskell could just omit the keyword where 
in the module head to avoid all this confusion.


Also, all the tutorials (and book) I've read only mention the layout rule
in passing somewhere deep inside the text and usually give a rather
unsatisfactory hand-waving description that omits to mention the special
case for where in the module head and/or the need for the sub-block to be
indented more than the parent block. So, depending on what tutorials people
have read, putting this together with the module where, a lot of confusion
is floating about...


Perhaps a wiki page is indicated?

Regards, Brian. 




Re: [Haskell-cafe] Layout rule (was Re: PrefixMap: code reviewrequest)

2006-03-03 Thread Brian Hulley

Brian Hulley wrote:

Brian Hulley wrote:
One other thing I've been wanting to ask (not to change! :-)) for a
while is: how is the following acceptable according to the rules in
the Haskell98 report where where is one of the lexemes, which when
followed by a line more indented than the line the
layout-starting-lexeme is on, should start an implicit block:

  module M where
  data T = ...   -- not indented!

According to my understanding of the layout algorithm, the above code
would have to be written:

  module M where
      data T = ...

Can anyone shed some light on what the formal rule is that allows the
first (and very useful) way of laying out code to be ok?


The solution (as someone pointed out to me in an email) is that the layout 
block only *finishes* when the current indentation is *less* than the 
indentation of the lines in the layout block (rather than *starting* only 
when the current indentation is *more* than the indentation of the line 
containing the where etc).


However I think there is an error in the description of this in section 2.7 
of the Haskell98 report, which states:


If the indentation of the non-brace lexeme immediately following a where, 
let, do or of is less than or equal to the current indentation level, then 
instead of starting a layout, an empty list {} is inserted, and layout 
processing occurs for the current level ...


I dispute the or equal in the above statement, since it seems to be 
clearly in contradiction to what is actually being done.


Regards, Brian. 




Re: [Haskell-cafe] Layout rule (was Re: PrefixMap: code reviewrequest)

2006-03-02 Thread Brian Hulley

Brian Hulley wrote:
[snip]

So any solutions welcome :-)


Thank to everyone who replied to my queries about this whole layout issue.

One other thing I've been wanting to ask (not to change! :-)) for a while 
is: how is the following acceptable according to the rules in the Haskell98 
report, where 'where' is one of the lexemes, which when followed by a line
more indented than the line the layout-starting-lexeme is on, should start 
an implicit block:


  module M where
  data T = ...   -- not indented!

According to my understanding of the layout algorithm, the above code would 
have to be written:


  module M where
      data T = ...

Can anyone shed some light on what the formal rule is that allows the first 
(and very useful) way of laying out code to be ok?


Thanks, Brian.




Re: [Haskell-cafe] Layout rule (was Re: PrefixMap: code reviewrequest)

2006-03-01 Thread Brian Hulley

Benjamin Franksen wrote:

[snip]
I am used to hitting the TAB key and getting the correct number of spaces,
according to how I configured my editor (NEdit) for the current
language mode.


The only thing then is what happens when you type backspace or left arrow to
get back out to a previous indentation? If the TAB key inserts spaces,
there's no problem going from left to right, but it seems more complicated
to go back out again, ie without having to type backspace 4 times, and, when
outdenting more, to hope that I haven't typed backspace 23 times instead of
24 times by mistake, thus not getting to the column I expected.


This is my only reason for wanting to keep tab characters in the text, and 
certainly it does give some disadvantages when trying to line up '|' '=' etc 
vertically - at the moment I admit my layouts do end up a bit contrived as I 
have to use more newlines to ensure I can use tabs only to accomplish the 
line-up...


So any solutions welcome :-)

Regards, Brian.







Re: [Haskell-cafe] Re: PrefixMap: code review request

2006-02-28 Thread Brian Hulley

Ben Rudiak-Gould wrote:

Brian Hulley wrote:

Whoever thought up the original Haskell layout rule assumed that
people would be happy using a single fixed width font, tabs set to 8
spaces, and didn't care about the brittleness of the code (in the
face of identifier renamings) it allowed one to write.


Are you complaining that Haskell permits you to write code with these
problems, or that it requires you to? The latter is not true. Instead

[snip]

Just that it allows you to, because this means other people's code (which 
you may be editing) can be brittle.



If you have a different layout rule in mind I'd be interested in
hearing it, but I think Haskell's is quite good overall.


Here is my proposed layout rule:

1) All layout keywords (where, of, let, do) must either be followed by a
single element of the corresponding block type, an explicit block
introduced by '{', or a layout block whose first line starts on the *next*
line and whose indentation is accomplished *only* by tabs


In particular, this allows:

 let a = 56 in a*a

and

 let
   a = 56
   b = 78
 in a*b

but not

 let a = 56
     b = 78

or

 let a = 56; b = 78
     c = 90

I would also make it that explicit braces are not allowed to switch off the
layout rule (ie they can be used within a layout), multiline strings would
not be permitted, and multiline comments would not be permitted (pragmas
could easily be used just by using --#). (I'd have a special keyword eg
'{}module' instead of 'module' at the top of a file to switch off layout for
the whole file if required, but making the presence of the layout rule
depend on whether or not there are surrounding braces makes life *way* too
complicated imho.)


This would give the following advantages:

1) When you see a ';' you could immediately tell which block it belongs to 
by looking backwards till the next '{'


2) Variable width fonts can be used, or different font faces to represent 
different sorts of identifier eg class names, tycons, value constructors, 
operators like `seq` as opposed to seq etc


3) Using only tabs ensures that vertical alignment goes to the same position 
on the screen regardless of the font and tabs could even have different 
widths just like in a wordprocessor


4) Any keypress has a localised effect on the parse tree of the buffer as a
whole (a stray '"' or '{' no longer kills everything which follows, and
there would be no '{-')


5) It paves the way for a much more immersive editing environment, but I 
can't say more about this at the moment because I haven't finished writing 
it yet and it will be a commercial product :-)))


Using my self-imposed layout rule I'm currently editing all my Haskell code 
in a standard text editor using tabs set to 4 spaces and a variable width 
font and have no problems.


Regards, Brian. 




Re: [Haskell-cafe] Layout rule (was Re: PrefixMap: code review request)

2006-02-28 Thread Brian Hulley

Ben Rudiak-Gould wrote:

Brian Hulley wrote:

Here is my proposed layout rule:

1) All layout keywords (where, of, let, do) must either be followed
by a single element of the corresponding block type, an explicit
block introduced by '{', or a layout block whose first line starts
on the *next* line


I wouldn't have much trouble adapting to that.


and whose indentation is accomplished *only* by tabs


You can't be serious. This would cause far more problems than the
current rule.


Why? Surely typing one tab is better than having to hit the spacebar 4 (or 
8) times?





I would also make it that explicit braces are not allowed to switch
off the layout rule (ie they can be used within a layout),


I don't understand. What does "used within a layout" mean?


I meant that {;} would be used just like any other construct that has to
respect the layout rule so you could write

  let
   a = let { b = 6; z = 77;
             h = 99;
             p = 100 } in b+z+h+p

etc but not:

  let
   a = let { b = 6; z = 77;
   h = 99;   -- this binding would be part of the outermost 'let'
             p = 100 } in b+z+h+p




multiline strings would not be permitted,


They aren't now, except with \ escapes. A stray " will be caught on
the same line unless the line happens to end with \ and the next line
happens to begin with \, which is exceedingly unusual.


and multiline comments would not be permitted
(pragmas could easily be used just by using --#)


But --# doesn't introduce a comment. And this would make UNPACK
pragmas rather inconvenient to use.


I meant '-- #', but I hadn't thought about UNPACK...
The motivation in both points is to make it easy for an editor to determine 
which lines need to be re-parsed based on the number of leading tabs alone.





1) When you see a ';' you could immediately tell which block it
belongs to by looking backwards till the next '{'


I guess that might be helpful, but it doesn't seem easier than
looking left to the beginning of the current line and then up to the first
less-indented line.


There was an example posted on another thread where someone had got into
confusion by using ';' after a let binding in a do construct with an explicit
brace after the 'do' but not after the 'let' (sorry I can't find it again).
Also, the current layout rule uses the notion of an implicit opening brace
which is to be regarded as a real opening brace as far as ';' is concerned
but an unreal, non-existent opening brace as far as '}' is concerned. Thus I
think it is a real mix-up.





2) Variable width fonts can be used,


They can be used now, if you adhere to a certain style, but not
everyone likes that style. I wrote in C++ with a variable width font and
tabs at one time, but eventually went back to fixed width. One reason was
that I couldn't use comment layout conventions that tend (in my experience)
to improve readability more than monospacing hurts it. Another reason
was that glyph widths appropriate to natural languages didn't work
all that well for source code. Spaces are much more important in
source code than in natural language, for example. A proportional
font designed for source code would be nice, but I haven't found one
yet. Stroustrup used a mixture of proportional and monospaced glyphs
in _The C++ Programming Language_ and it worked well.

or different font faces to
represent different sorts of identifier eg class names, tycons, value
constructors, operators like `seq` as opposed to seq etc


Lots of editors do this with monospaced fonts; I think it's
orthogonal to the layout issue.


For example on Windows Trebuchet MS is a very nice font, also Verdana, both 
of which are not monospaced. But yes I agree it's not a major issue and I 
just see the option of being able to use them as a nice side-effect.





3) Using only tabs ensures that vertical alignment goes to the same
position on the screen regardless of the font and tabs could even
have different widths just like in a wordprocessor


Requiring tabs is a really bad idea. Just forget it. Seriously.


I'm really puzzled here. I've been using tabs to indent my C++ code for at
least 10 years and don't see the problem. The only problem would be if
someone mixed tabs with spaces. Since it has to be either tabs only or
spaces only I'd choose tabs only to save keystrokes. I suppose though it is
always going to be a matter of personal taste...





4) Any keypress has a localised effect on the parse tree of the
buffer as a whole ( {  no longer kill everything which follows and
there would be no {- )


I don't understand why this is an advantage. If you have an editor
that highlights comments in green, then large sections of the program
will flash green while you type a {- -} comment, which might be
annoying, but it also means you'll never forget to close the comment,
so the practical benefit of forbidding {- -}, as opposed to simply
not typing it yourself

Re: [Haskell-cafe] PrefixMap: code review request

2006-02-27 Thread Brian Hulley

David F.Place wrote:
[snip]

partList :: Ord k => [([k],v)] -> [k] -> [(k,[([k],v)])]
partList pairs alphabet = reverse . fst $ foldl' f ([],pairs) alphabet
where f (result,pairs) l = (result',rest)
  where (part,rest) = span ((==l) . head . fst) pairs
result' = if null part
 then result
 else (l,part):result


If the above code is put into a non-literate file as:

partList :: Ord k => [([k],v)] -> [k] -> [(k,[([k],v)])]
partList pairs alphabet = reverse . fst $ foldl' f ([],pairs) alphabet
where f (result,pairs) l = (result',rest)
  where (part,rest) = span ((==l) . head . fst) pairs
result' = if null part
 then result
 else (l,part):result

there is a parse error (using ghc) at the line beginning with result'. This 
binding doesn't line up with anything. Also the second 'where' is 
dangerously close to the column started by the 'f' after the first 'where' 
(may not be noticeable in this email due to whatever font it is being 
displayed in but it's only indented by one space) which makes it a bit 
tricky to read.


I suggest:

partList :: Ord k => [([k],v)] -> [k] -> [(k,[([k],v)])]
partList pairs alphabet = reverse . fst $ foldl' f ([],pairs) alphabet
   where
  f (result,pairs) l = (result',rest)
 where
(part,rest) = span ((==l) . head . fst) pairs
result' = if null part
 then result
 else (l,part):result

or:

partList :: Ord k => [([k],v)] -> [k] -> [(k,[([k],v)])]
partList pairs alphabet = reverse . fst $ foldl' f ([],pairs) alphabet
   where
  f (result,pairs) l = (result',rest)
 where
(part,rest) = span ((==l) . head . fst) pairs
result' =
   if null part
  then result
  else (l,part):result

because always starting an 'if' construct on a new line ensures that you 
will never ever have any problems with its layout (especially helpful for 
people used to C) when you use it in a 'do' block.


Also, both the above variants ensure that your code can be edited using 
variable width fonts, any tabbing regime you like (as long as leading 
whitespace only has tabs and never spaces), and will be immune to identifier 
renamings. The golden rule for 'safe' layout is always to start a layout 
block on the next line after 'do' 'where' 'of' 'let' and to try to resist 
the temptation to save a line by squashing the first binding onto the same 
line as the 'let' etc.
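
For instance, a minimal sketch of the rule applied to 'let' (the function and 
names here are made up purely for illustration):

mean :: [Double] -> Double
mean xs =
    let
        total = sum xs
        count = length xs
    in
        total / fromIntegral count

Renaming 'total' or 'count' can't disturb this layout, because no other 
binding is aligned against them.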


The second variant has the added advantage that the horizontal indentation 
is kept under control (eg in the above all indenting is 3 spaces further in) 
whereas when people write things like


if p == q then let a = 4
                   b = if a == 4 then let q = 2
                                          s = 56

after only 3 indents the code is already half way across the screen (not to 
mention the fact that the above layout is completely brittle and can be 
destroyed by a simple search-and-replace op to change 'q' to 'hello')


Of course all the above is just easy-to-talk-about technicalities of layout 
and you were really interested in getting feedback about idiomatic usage - 
hopefully someone else will give some feedback about that (I'm too lazy...) 
:-)


Regards, Brian.




Re: [Haskell-cafe] PrefixMap: code review request

2006-02-27 Thread Brian Hulley

David F. Place wrote:

On Feb 27, 2006, at 5:54 PM, Brian Hulley wrote:


there is a parse error (using ghc) at the line beginning with
result'. This binding doesn't line up with anything. Also the
second 'where' is dangerously close to the column started by the
'f' after the first 'where' (may not be noticeable in this email
due to whatever font it is being displayed in but it's only
indented by one space) which makes it a bit tricky to read.


Whoops, that's noise in the transmission.  In my original file and my
original email, it is indented correctly.  As for other indentation
issues, I always use whatever emacs suggests.  Is that not a good
strategy?


I always think it's a bit like income tax! Over the centuries the rules have 
got more and more complicated and instead of simplifying everything, 
computers have allowed all this mess to survive which ultimately makes life 
difficult for us poor humans because no-one knows any more what's going on 
(except a computer which has its own agenda for humanity...)


Whoever thought up the original Haskell layout rule assumed that people 
would be happy using a single fixed width font, tabs set to 8 spaces, and 
didn't care about the brittleness of the code (in the face of identifier 
renamings) it allowed one to write. A like-minded person has then created an 
emacs mode which supports this. It is probably also possible to create an 
emacs macro to safely rename identifiers by first parsing the code, doing 
the renaming in the AST, then pretty-printing the code back into the file.


However if you voluntarily use only a restricted subset of all possible 
layouts, you end up with non-brittle code that can be edited in any editor 
(eg someone else's editor who doesn't understand emacs:-) ) and where safely 
renaming identifiers is just a simple text-based search-and-replace 
operation.


Of course having said all this, many people have strong personal views about 
different ways of laying out code, whether to use tabs or not, etc etc.


I was just using your example as an excuse for some consciousness-raising 
about safe vs brittle layouts, but of course at the end of the day everyone 
has to decide for themselves. For example you might find that the aesthetics 
or readability of a 'brittle' layout, or the ease of editing using the 
particular emacs mode, outweighs the disadvantages of its being brittle.


As Dr Phlox said on Star Trek Enterprise to T'Pol: "Infinite diversity in 
infinite combinations"!!! :-)


Best regards, Brian. 




Re: [Haskell-cafe] rounding errors with real numbers.

2006-02-26 Thread Brian Hulley

Matthias Fischmann wrote:

| -- fix rounding error:
| repair [i] = [upper]
| repair (h:t) = h : repair t


Just to point out that this only fixes the last element of the list, so 
inputs like [1,2,10.8,10.8] would not be handled properly if you require the 
same input values to map to the same output values (I assume such inputs 
don't arise in the context you're using but in a general context the above 
wouldn't be a solution).


Another thing is that when using floating point numbers, is there really any 
difference between 1.0 and 0.999 anyway? It's usually not recommended to 
ever test floats for equality since, depending on the architecture, the 
same float can end up being represented differently depending on what 
optimizations are happening eg an implementation could conceivably be making 
use of two different fp units if values are passed between different 
concurrently executing threads in a multi-processor or distributed 
processing environment...
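
For what it's worth, the usual alternative is an approximate comparison 
against a tolerance; a minimal sketch (the name approxEq and the tolerance 
are my own, chosen arbitrarily):

approxEq :: Double -> Double -> Bool
approxEq a b = abs (a - b) <= eps * max 1 (max (abs a) (abs b))
    where eps = 1e-9    -- arbitrary; pick a tolerance suited to the data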


Regards, Brian. 




[Haskell-cafe] Context in data and class declarations (was haskell programming guidelines)

2006-02-25 Thread Brian Hulley

Hi -
In 
http://www.informatik.uni-bremen.de/agbkb/forschung/formal_methods/CoFI/hets/src-distribution/versions/HetCATS/docs/Programming-Guidelines.txt 
one of the recommendations states:


Don't put class constraints on a data type, constraints belong only to the 
functions that manipulate the data.


This made me think: what on earth was the point of allowing contexts on a 
data declaration in the first place? I've always found it a confusing 
feature of Haskell since you in any case need to repeat all the (relevant) 
constraints when declaring the type of any function that manipulates the 
data. There does not appear to be any sensible reason for requiring the 
constraints for constructor functions, because the constructor functions 
would never ever need to use them... Therefore would it be a good idea to 
get rid of this feature altogether?
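
To illustrate the point with a sketch (a made-up type, using the Haskell 98 
datatype-context feature under discussion):

data Ord a => SortedList a = SortedList [a]

-- the Ord constraint still has to be repeated on every function anyway:
insert :: Ord a => a -> SortedList a -> SortedList a
insert y (SortedList xs) =
    SortedList (takeWhile (< y) xs ++ [y] ++ dropWhile (< y) xs)

The context on the data declaration buys nothing here; it merely forces the 
repetition.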


Another confusing thing is the use of the word "inheritance" in 
tutorials/books about class declarations. Unlike object oriented languages, 
where a class or interface gets all the methods of its ancestor 
classes/interfaces in addition to some new methods declared at that level, 
each Haskell type class is completely independent of any other type class. 
For example, the class Ord contains methods for (<) (<=) (>=) (>) max min 
but does not contain the methods of Eq even though this confusing word 
"inheritance" or "superclass" would imply that it should. Ord does *not* 
inherit anything at all - the meaning of the Eq context in the class 
declaration is just that we will need the Eq dictionary in addition to the 
Ord dictionary when calling any of Ord's methods.


Thus I propose that the contexts don't really have any place in a class 
declaration either ie


  class Eq a => Ord a where
      (<), (<=), ... :: a -> a -> Bool

would be better written as:

class Ord a where
    (<), (<=), ... :: Eq a => a -> a -> Bool

so the language wouldn't be so confusing to learn. Classes are after all 
extremely simple! :-)


Regards, Brian.







Re: [Haskell-cafe] Context in data and class declarations (was haskellprogramming guidelines)

2006-02-25 Thread Brian Hulley

Brian Hulley wrote:

[snip]
Another confusing thing is the use of the word inheritance in
tutorials/books about class declarations. Unlike object oriented
languages, where a class or interface gets all the methods of its
ancestor classes/interfaces in addition to some new methods declared
at that level, each Haskell type class is completely independent of
any other type class. For example, the class Ord contains methods for
(<) (<=) (>=) (>) max min but does not contain the methods of Eq even
though this confusing word "inheritance" or "superclass" would imply
that it should. Ord does *not* inherit anything at all - the
meaning of the Eq context in the class declaration is just that we
will need the Eq dictionary in addition to the Ord dictionary when
calling any of Ord's methods.
Thus I propose that the contexts don't really have any place in a
class declaration either ie

  class Eq a => Ord a where
      (<), (<=), ... :: a -> a -> Bool

would be better written as:

class Ord a where
    (<), (<=), ... :: Eq a => a -> a -> Bool

so the language wouldn't be so confusing to learn. Classes are after
all extremely simple! :-)


Sorry folks I've got all this completely wrong! ;-)
It seems the Ord dictionary does in fact contain the Eq dictionary since the 
following examples compile:


test :: Ord a => a -> a -> Bool
test x y = x <= y

test2 :: Ord a => a -> a -> Bool
test2 x y = x == y

I think what confused me was that although the Ord dictionary contains the 
methods for Eq, you cannot override the Eq methods in an instance 
declaration for Ord - you have to use two instance declarations to achieve 
this, one for Ord and one for Eq, so as far as instance declarations are 
concerned, the type classes are separate, but the dictionaries are not.
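
To spell that out with a sketch (a made-up type, but this is the standard 
pattern):

data Colour = Red | Green | Blue

-- the Eq methods can only be given in an Eq instance...
instance Eq Colour where
    Red   == Red   = True
    Green == Green = True
    Blue  == Blue  = True
    _     == _     = False

-- ...while the Ord methods go in a separate Ord instance, even though
-- the Ord dictionary carries the Eq dictionary around inside it
instance Ord Colour where
    compare a b = compare (idx a) (idx b)
        where
            idx :: Colour -> Int
            idx Red   = 0
            idx Green = 1
            idx Blue  = 2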


Regards, Brian. 




Re: [Haskell-cafe] Type inference

2006-02-09 Thread Brian Hulley

Cale Gibbard wrote:

On 09/02/06, Brian Hulley [EMAIL PROTECTED] wrote:

Brian Hulley wrote:

f :: forall m. (forall a. a -> m a) -> c -> d -> (m c, m d)


Of course this type doesn't work on your original example, since (,)
is a type constructor with two parameters, not one, but this type
signature does actually work just fine in GHC.

---
data Pair x = Pair x x deriving Show
data Id x = Id x deriving Show

f :: forall m c d. (forall a. a -> m a) -> c -> d -> (m c, m d)
f g x y = (g x, g y)
---
*Main> f (\x -> Pair x x) 3 "hello"
(Pair 3 3,Pair "hello" "hello")
*Main> f (\x -> Id x) 3 "hello"
(Id 3,Id "hello")


Perhaps in practice this kind of workaround is sufficient, although this 
requires you to put the results into a form that can be syntactically 
matched against m a.


I wonder though if it would be possible to have a type system that would 
work for my original example by automatically creating internal declarations 
type T1 a = (a,a)  and type T2 a = a so that m would match against T1 and T2 
respectively.


It is difficult to know if this would be a worthwhile thing to do or if most 
practical applications of polymorphism in any case involve transformations 
whose shape can be captured sufficiently with multi-param typeclasses etc.


Regards, Brian




Re: [Haskell-cafe] Type inference

2006-02-08 Thread Brian Hulley

Fred Hosch wrote:

Is type inferencing in Haskell essentially the same as in SML? Thanks.


Well, that depends on what you mean by essentially the same ;-)

Both languages are based on the same Hindley-Milner type inference 
algorithm, so both suffer from the same problem that a function such as


  f g x y = (g x, g y)

can't be given a very satisfactory type (ie there exist perfectly good 
programs that will be rejected by both SML and Haskell due to limitations 
inherent in the kinds (excuse the pun) of type system that can be used with 
HM type inference)


However, Haskell has a lot of advanced features that are bolted on to this 
foundation, which SML doesn't. One such feature is arbitrary rank 
polymorphism, which allows you to use a function argument in more than one 
way within a function, for example (compile with ghc -fglasgow-exts):


 h :: (forall a. a -> a) -> b -> c -> (b,c)
 h g x y = (g x, g y)  -- the argument g is used polymorphically

This function can't be written in SML. Note that although h is similar to f, 
there would not exist a type for h if g could be an arbitrary function ie 
a -> d instead of a -> a.
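
For example, a quick sketch of applying h as defined above:

 example :: (Int, String)
 example = h id 3 "hello"    -- evaluates to (3,"hello")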


Of course SML's types are a bit different - they use tuples to introduce a 
notion of dimensionality and they don't have any higher order types.


SML also has a complicated thing called the value restriction because it 
allows mutable references to be altered via side effects. Because Haskell 
has no side effects, there is no need for a value restriction.


Regards, Brian. 




Re: [Haskell-cafe] Type inference

2006-02-08 Thread Brian Hulley

Cale Gibbard wrote:

On 08/02/06, Brian Hulley [EMAIL PROTECTED] wrote:

Fred Hosch wrote:

Is type inferencing in Haskell essentially the same as in SML?
Thanks.


Well, that depends on what you mean by essentially the same ;-)

Both languages are based on the same Hindley-Milner type inference
algorithm, so both suffer from the same problem that a function such
as

   f g x y = (g x, g y)

can't be given a very satisfactory type (ie there exist perfectly
good programs that will be rejected by both SML and Haskell due to
limitations inherent in the kinds (excuse the pun) of type system
that can be used with HM type inference)

However, Haskell has a lot of advanced features that are bolted on
to this foundation, which SML doesn't. One such feature is arbitrary
rank polymorphism, which allows you to use a function argument in
more than one way within a function, for example (compile with ghc
-fglasgow-exts):

  h :: (forall a. a -> a) -> b -> c -> (b,c)
  h g x y = (g x, g y)  -- the argument g is used polymorphically

This function can't be written in SML. Note that although h is
similar to f, there would not exist a type for h if g could be an
arbitrary function ie a -> d instead of a -> a.


The type forall a d. a -> d isn't terribly useful, since there just
aren't many functions of that type. You can give a type signature
like:

f :: (forall a b. a -> b) -> c -> d -> (e,f)
f g x y = (g x, g y)

but good luck actually applying the function in a useful way :) The
only thing you can reasonably pass as g (barring the existence of
functions which completely break the type system) is bottom.

So what is it that you're looking for here? I'm not sure I understand.


For example, you can't have:

   f g x y = (g x, g y)

  a = f (\x -> (x,x)) 3 "hello" -- example 1

  b = f (\x -> x) 5 "bye"  -- example 2

because there is no way to express the relationship between the arguments 
x,y and the results g x, g y without fixing down the shape of g's result in 
the definition of f.


If Haskell used intersection types instead of System F (I hope I've got this 
right) then you could write:


 f :: (a -> b & c -> d) -> a -> c -> (b,d)

or

 f :: (forall a m. a -> m a) -> c -> d -> (m c, m d)

where the intention is that m would match (,) in example 1 and the 
identity type constructor (I in type I a = a) in example 2.


Regards, Brian.



Re: [Haskell-cafe] Type inference

2006-02-08 Thread Brian Hulley

Brian Hulley wrote:

 f :: (forall a m. a -> m a) -> c -> d -> (m c, m d)



The above is wrong - there is no way to quantify m properly. This must be 
why intersection types need to be written with & after all







Re: [Haskell-cafe] Type inference

2006-02-08 Thread Brian Hulley

Brian Hulley wrote:

Brian Hulley wrote:

 f :: (forall a m. a -> m a) -> c -> d -> (m c, m d)



The above is wrong - there is no way to quantify m properly. This
must be why intersection types need to be written with & after
all


What am I saying! It's right after all, and might be better than the & 
syntax because it makes the dependency clearer (assuming you can't write a 
function that is both Int -> String and Float -> Int) 




Re: [Haskell-cafe] Type inference

2006-02-08 Thread Brian Hulley

Brian Hulley wrote:

Brian Hulley wrote:

Brian Hulley wrote:

 f :: (forall a m. a -> m a) -> c -> d -> (m c, m d)



The above is wrong - there is no way to quantify m properly. This
must be why intersection types need to be written with & after
all


What am I saying! It's right after all, and might be better than the &
syntax because it makes the dependency clearer (assuming you can't
write a function that is both Int -> String and Float -> Int)


Last correction!

f :: forall m. (forall a. a -> m a) -> c -> d -> (m c, m d)

Sorry for all this confusion.


Re: [Haskell-cafe] Re: extending bang proposal Re: strict Haskelldialect

2006-02-07 Thread Brian Hulley

Ben Rudiak-Gould wrote:

Brian Hulley wrote:

One motivation seems to be that in the absence of whole program
optimization, the strictness annotations on a function's type can
allow the compiler to avoid creating thunks at the call site for
cross-module calls whereas using seq in the function body itself
means that the thunk still has to be created at the call site
because the compiler can't possibly know that it's going to be
immediately evaluated by seq.


GHC solves this with the worker-wrapper transformation: the code for
the wrapper is exported as part of the module's interface and inlined
at external call sites. It handles seq, unboxing, and so on and calls
the worker via a private interface.

Not that I think strictness information in the type system is a bad
idea.


Sounds cool. I wonder if strictness annotations are ever really needed (eg 
if the perfect optimizer were possible to construct)?


One problem I see with strictness annotations in functions is that there 
could easily be an exponential growth in variants of a function (and callers 
of that function...) to handle different strictness requirements (eg for map 
with all of Clean's list types etc) leading to a bit of a minefield for the 
humble programmer... :-)


Regards, Brian. 




Re: [Haskell-cafe] Re: extending bang proposal Re: strict Haskelldialect

2006-02-06 Thread Brian Hulley

Ben Rudiak-Gould wrote:


As Robert Dockins said, it's not implemented, and it isn't clear how
to implement it. At this point it's looking fairly likely that my PhD
thesis will be on this very topic, so stay tuned.


Isn't all this already implemented in Clean?

Regards, Brian.


Re: [Haskell-cafe] Why is $ right associative instead of leftassociative?

2006-02-05 Thread Brian Hulley

Jon Fairbairn wrote:

Brian Hulley wrote:

[snip]


Not exactly alone; I've felt it was wrong ever since we
argued about it for the first version of Haskell. : for
typing is closer to common mathematical notation.

But it's far too late to change it now.


- it's just syntax after all


Well I'm reconsidering my position that "it's just syntax". Syntax does 
after all carry a lot of semiotics for us humans, and if there are centuries 
of use of ":" in mathematics that are just to be discarded because someone 
in some other language decided to use it for list cons then I think it makes 
sense to correct this.


It would be impossible to get everything right first time, and I think the 
Haskell committee did a very good job with Haskell, but just as there can be 
bugs in a program, so there can also be bugs in a language design, and an 
interesting question is how these can be addressed.


For example, in the Prolog news group several years ago, there was also a 
discussion about changing the list cons operator, because Prolog currently 
uses "." which is much more useful for forming composite names - something 
which I also think has become a de-facto inter-language standard. Although 
there was much resistance from certain quarters, several implementations of 
Prolog had in fact changed their list cons operator (list cons is hardly 
ever needed in Prolog due to the [Head|Tail] sugar) to reclaim the dot for 
its proper use.


My final suggestion if anyone is interested is as follows:

1) Use ":" for types
2) Use "," instead of ";" in the block syntax so that all brace blocks can 
be replaced by layout if desired (including record blocks)
3) Use ";" for list cons. ";" is already used for forming lists in natural 
language, and has the added advantage that (on my keyboard at least) you 
don't even need to press the shift key! ;-)


Regards, Brian.




Re: [Haskell-cafe] Why is $ right associative instead of leftassociative?

2006-02-05 Thread Brian Hulley

Tomasz Zielonka wrote:

The only problem I see right now is related to change locality. If I
have a chain like this:

   f x y .
   g x $
   z

and I want to add some transformation between g and z I have to
change one line and insert another

   f x y .
   g x .
   h x y $
   z

With right-associative $ it would be only one line-add. Probably not a
very strong argument.


How about:

 f x y
 . g x
 $ z

then you only need to add the line

 . h x y

This is similar to how people often format lists:

a =
 [ first
 , second
 , third
 ]

Regards, Brian.


Re: [Haskell-cafe] Why is $ right associative instead of leftassociative?

2006-02-05 Thread Brian Hulley

Tomasz Zielonka wrote:

On Sun, Feb 05, 2006 at 01:14:42PM -, Brian Hulley wrote:

How about:

 f x y
 . g x
 $ z

then you only need to add the line

 . h x y


But then you have a problem when you want to add something
at the beginning ;-) With right-assoc $ adding at both ends is OK.


This is similar to how people often format lists:

a =
 [ first
 , second
 , third
 ]


I am one of those people, and I am slightly annoyed when I have to
add something at the beginning of the list. I even went so far that,
when I had a list of lists which were concatenated, I put an
empty list at the front:

   concat $
   [ []
   , [...]
   , [...]
   .
   .
   .
   ]


Just in case you are interested, in the preprocessor I'm writing, I would 
write these examples as:


   (.) #>
       f x y
       g x
       h x y
    $ z

and
a = #[
   first
   second
   third

where exp #> {e0,e1,...} is sugar for let a = exp in a e0 (a e1 (a ... ) 
...)) and #[ {e0, e1, ... } is sugar for [e0, e1, ...] (exp #> block 
and exp #< block are the right and left associative versions 
respectively, and the special # sugar allows a layout block to be started if 
it occurs at the end of a line)


This allows me to avoid having to type lots of syntax eg repeating the . 
all the time and focus on the semantics...


Regards, Brian. 




Re: [Haskell-cafe] Re[2]: strict Haskell dialect

2006-02-05 Thread Brian Hulley

Brian Hulley wrote:

Brian Hulley wrote:

Robin Green wrote:

[snip]
So simply make strictness the default and have laziness annotations
(for arguments), instead of making laziness the default and having
strictness annotations.


Where would you put these laziness annotations?
If you put them in the function declaration eg as in:

if' :: Bool -> ~a -> ~a -> a   [corrected]

presumably you'd want the compiler to pass the args as thunks instead
of evaluated values. However this means that all args to every
function would have to be passed as thunks, even though for strict
functions these thunks would immediately be evaluated. The problem is
that there is no way for the compiler to optimize out the thunk
creation / evaluation step because it occurs across the black box
of a function call, thus we wouldn't get the same efficiency as in a
language such as ML where no thunks are created in the first place.


I'm just sooo slow!!! ;-) Of course the laziness info would now be
part of the function's type so the compiler would be able to generate
the correct code to prepare thunks or evaluated values before calling
the function. So your idea of laziness annotations for args would
give the best of both worlds :-)


For an eager language, a state monad could perhaps be defined by

 data ST m a = ST ~(m -> (m,a))

and the other operations would work as normal without any additional 
annotations. (?)


I must admit I'm a bit confused as to why the strictness annotations in 
Haskell (and Clean) are only allowed in data declarations and not function 
declarations, since it seems a bit random to have to guess which args can be 
evaluated strictly at the call site although it of course gives flexibility 
(eg to use (+) strictly or lazily). The type system doesn't prevent someone 
from writing () m0 $! m1 even though the author of () may have been 
relying on m1 being lazily evaluated... (?)


For an eager language, it would seem that lazy annotations would have to be 
allowed as part of a function's type so that if' could be implemented. Does 
anyone know of a type system that incorporates lazy annotations, and/or how 
these would be propagated?


What would the signature of a lazy map function be?

map :: (~a -> ~b) -> ~[a] -> ~[b]
map :: (a -> b) -> ~[~a~] -> ~[b~]

   etc etc - quite a puzzle!!!

Thanks, Brian. 




Re: Re[2]: [Haskell-cafe] Re[2]: strict Haskell dialect

2006-02-05 Thread Brian Hulley

Bulat Ziganshin wrote:

Hello Brian,

Saturday, February 04, 2006, 4:50:44 AM, you wrote:


One question is how to get some kind of do notation that would
work well in a strict setting.
The existing do notation makes use of laziness in so far as the
second arg of >> is only evaluated when needed. Perhaps a new
keyword such as 'go' could be used to use >>= instead ie:



If strictness was the default (eg if the language were ML not
Haskell), then in



 putStr "hello" >> putStr (show 1)



both args to >> would be evaluated before >> was called. Thus
putStr (show 1) would be evaluated before the combined monad is
actually run, which would be wasteful if we were using a monad with
a >> function that only runs the rhs conditionally on the result of
the lhs.
If Haskell were a strict language I think an equivalent for the do
notation would have to lift everything (except the first expression)
and use >>= instead of >>.


it seems that you misunderstand the monads (or may be i misunderstand
:)

each and every monadic operation is a function! type IO a is really
RealWorld -> (RealWorld,a) and the same for any other monad. concept
of the monad by itself means carrying hidden state from one monadic
operation to the next. that allows to _order_ monadic operations plus
this state used for zillions other things, including state, logs,
fails and so on, so on


exp1 >> exp2 in a strict setting would force exp1 to be evaluated to a 
monad, exp2 to be evaluated to a monad, then these monads to be combined 
using >> into another monad, which at some later point would actually be 
run. But it is this eager evaluation of exp2 into the rhs monad that is the 
problem, because in the example above, (show 1) would be evaluated during 
the evaluation of (putStr "hello" >> putStr (show 1)) whereas in Haskell it 
would only be evaluated when the combined monad is actually run (because it 
is only at this point that Haskell actually creates the combined monad from 
the thunk).
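
A sketch of the distinction in Haskell notation (the names eager/lifted are 
made up; the comments describe what would happen under strict evaluation):

-- strict evaluation would compute (show 1) while merely *building*
-- this combined action:
eager :: IO ()
eager = putStr "hello" >> putStr (show 1)

-- lifting the second expression under a lambda delays it until the
-- combined action is actually run:
lifted :: IO ()
lifted = putStr "hello" >>= \_ -> putStr (show 1)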


Regards, Brian.



Re: [Haskell-cafe] Why is $ right associative instead of leftassociative?

2006-02-05 Thread Brian Hulley

Tomasz Zielonka wrote:

On Sun, Feb 05, 2006 at 01:10:24PM -, Brian Hulley wrote:

2) Use "," instead of ";" in the block syntax so that all brace
blocks can be replaced by layout if desired (including record blocks)


Wouldn't it be better to use ";" instead of "," also for record syntax?


I thought of this also, but the nice thing about using commas everywhere is 
that it is consistent with tuples and lists:


   [a,b,c]
   (a,b,c)
   {a,b,c}

I admit it takes some getting used to to write:

   map f (h;t) = f h ; map f t

but you couldn't use commas in tuple syntax if they were also used as list 
cons.


Also, I'm using

   test : {Eq a, Show a} a -> ()

instead of

   test :: (Eq a, Show a) => a -> ()

and the comma here is particularly nice because it suggests a set, which is 
exactly what the context is.


Regards, Brian. 




Re: [Haskell-cafe] Re[2]: strict Haskell dialect

2006-02-05 Thread Brian Hulley

Tomasz Zielonka wrote:

On Sun, Feb 05, 2006 at 05:18:55PM -, Brian Hulley wrote:

I must admit I'm a bit confused as to why the strictness annotations
in Haskell (and Clean) are only allowed in data declarations and not
function declarations


Clean does allow strictness annotations in function types.


Thanks for pointing this out - I must admit I had only taken a very quick 
look at Clean (I was overwhelmed by the complicated type system) but now 
I've found the place in the Clean book that describes strictness annotations 
for function types so I must look into this a bit more.


If I wanted to write a 3d computer game in Haskell (or Clean), would lazy 
evaluation with strictness annotations lead to as fast a program as eager 
evaluation with lazy annotations for the same amount of programming effort? 
And would the result be as fast as an equivalent program in C++ or OCaml or 
MLton? If so, there would obviously be no point wasting time trying to 
develop an eager dialect of Haskell (or Clean).


I wonder if current compilation technology for lazy Haskell (or Clean) has 
reached the theoretical limits on what is possible for the compiler to 
optimize away, or if it is just that optimization has not received so much 
attention as work on the type system etc?


Regards, Brian. 




Re: [Haskell-cafe] Re: Why is $ right associative insteadofleftassociative?

2006-02-05 Thread Brian Hulley

Ben Rudiak-Gould wrote:

Paul Hudak wrote:

Minor point, perhaps, but I should mention that : is not special
syntax -- it is a perfectly valid infix constructor.


[snip]
... but no more confusing than the fact that [f x | x <- xs] is
not the same as (map f xs).


Can you explain why? On page 258 of Paul Hudak's book The Haskell School of 
Expression he states that do {x <- xs; return (f x)} is equivalent to [f x | x 
<- xs] which is clearly just map f xs


I can't find anything wrong with the example in the book but perhaps I've 
missed something?
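
Perhaps - and this is only my guess at what Ben meant - the two forms 
genuinely differ when the generator pattern is refutable, since 
pattern-match failures are silently skipped in a comprehension but crash in 
a map:

xs :: [Maybe Int]
xs = [Just 1, Nothing, Just 3]

viaComprehension :: [Int]
viaComprehension = [y | Just y <- xs]    -- [1,3]: the Nothing is skipped

viaMap :: [Int]
viaMap = map (\(Just y) -> y) xs         -- crashes on the Nothing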


Regards, Brian. 




Re: [Haskell-cafe] Why is $ right associative instead of leftassociative?

2006-02-05 Thread Brian Hulley

Tomasz Zielonka wrote:

On Sun, Feb 05, 2006 at 04:36:44PM -, Brian Hulley wrote:

Just in case you are interested, in the preprocessor I'm writing,
I would write these examples as:

   (.) #>
       f x y
       g x
       h x y
    $ z

and
a = #[
   first
   second
   third

where exp #> {e0,e1,...} is sugar for let a = exp in a e0 (a e1 (a
... ) ...)) and #[ {e0, e1, ... } is sugar for [e0, e1, ...]
(exp #> block and exp #< block are the right and left associative
versions respectively and the special # sugar allows a layout block
to be started if it occurs at the end of a line)


Well... I care about change locality and the like, but I'm not sure
I would use such syntax (as a means of communication between
programmers). Perhaps that's because I am not used to it and it looks
alien. But it's rather because I still put readability first.


It is true that it looks quite alien at first, but consider that it allows 
you to use longer identifiers for function names (because they now only need 
to be written once) which could actually enhance readability eg


   Prelude.compose #>
       f x y
       g x
       h x y
    $ z

so perhaps people would start using more real words instead of obscure 
symbols like =+= etc. Also, the less use of infix notation the better, 
because every infix symbol requires the reader to search for the fixity 
declaration then try to simulate a precedence parser at the same time as 
grappling with the semantics of the code itself. The #>, #< notation solves 
this problem by making the sugared associativity immediately visible, and 
the use of layout further enhances the direct visual picture of what's 
happening.


Anyway it's just an idea I thought I'd share- I'm sure there's no danger of 
it ever ending up in a future Haskell... ;-)


Regards, Brian. 




[Haskell-cafe] Why is $ right associative instead of left associative?

2006-02-04 Thread Brian Hulley

Hi -
In the Haskell98 report section 4.4.2 $ is specified as being right 
associative. This means that f $ a0 a1 $ b0 b1 would parse as f (a0 a1 (b0 
b1)) which seems rather strange to me. Surely it would be much more useful 
if $ were defined as left associative so that it could be used to separate 
the args to f?


Does anyone know why this strange associativity was chosen?

Thanks, Brian.

(The reason I'm asking is that I'm working on the syntax of a language 
similar to Haskell but which uses layout to allow expressions like:


f #$    -- can be followed by an explicit block or layout block
    a0 a1
    b0 b1

which is sugar for (f $ a0 a1) $ b0 b1 ie f (a0 a1) (b0 b1), and I was 
surprised to discover that the parentheses are needed for the most obvious 
reading) 




Re: [Haskell-cafe] Why is $ right associative instead of left associative?

2006-02-04 Thread Brian Hulley

Tomasz Zielonka wrote:

On Sat, Feb 04, 2006 at 02:52:20PM -, Brian Hulley wrote:

Hi -
In the Haskell98 report section 4.4.2 $ is specified as being right
associative. This means that f $ a0 a1 $ b0 b1 would parse as f (a0
a1 (b0 b1)) which seems rather strange to me. Surely it would be
much more useful if $ were defined as left associative so that it
could be used to separate the args to f?

Does anyone know why this strange associativity was chosen?


Probably it was anticipated that the right associative version would
be more useful. You can use it to create a chain of transformations,
similar to a chain of composed functions:

   (f . g . h) x   =   f $ g $ h $ x

Example:

   map f $ group $ sort $ filter g $ l

But of course, left associative version can also be useful. Some
time ago I used a left associative version of the strict application
operator, which I named (!$).


I wonder if anyone has done empirical studies to determine scientifically 
which associativity would be more useful in practice eg by analysis of 
source code involving $ and comparing the number of parentheses that would 
be needed in each case, and perhaps also some studies involving the number 
of confused readers in each case...


Even though both versions are useful, it seems to me that faced with the 
choice of choosing an associativity for an operator that does function 
application, and given that prefix application is left associative, there is 
one clear winner, but unfortunately the Haskell committee didn't see it this 
way, and perhaps it is too late to ever change this (just like :: and : 
which were mixed up for reasons unknown). Especially since chains can 
already be composed using ".".




Anyway, you can't always remove all parentheses. And why would you
want to? Everybody is used to them.


$'s advertised purpose is to remove parentheses, but I agree that 
parenthesized code is often more readable (especially when operators have 
unexpected fixities... :-))


Regards, Brian. 




Re: [Haskell-cafe] Why is $ right associative instead of leftassociative?

2006-02-04 Thread Brian Hulley

Brian Hulley wrote:

Tomasz Zielonka wrote:

On Sat, Feb 04, 2006 at 02:52:20PM -, Brian Hulley wrote:

Hi -
In the Haskell98 report section 4.4.2 $ is specified as being right
associative. This means that f $ a0 a1 $ b0 b1 would parse as f (a0
a1 (b0 b1)) which seems rather strange to me. Surely it would be
much more useful if $ were defined as left associative so that it
could be used to separate the args to f?

Does anyone know why this strange associativity was chosen?


Probably it was anticipated that the right associative version would
be more useful. You can use it to create a chain of transformations,
similar to a chain of composed functions:

   (f . g . h) x   =   f $ g $ h $ x


Actually I'm beginning to think this might be more useful after all.



Example:

   map f $ group $ sort $ filter g $ l

But of course, left associative version can also be useful. Some
time ago I used a left associative version of the strict application
operator, which I named (!$).


I suppose I could use $$ for left associative application, and #$$ for 
layout application.
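
Such a left associative application operator is easy enough to define; a 
minimal sketch:

infixl 0 $$

($$) :: (a -> b) -> a -> b
f $$ x = f x

-- f $$ a0 a1 $$ b0 b1 now parses as (f (a0 a1)) (b0 b1)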




I wonder if anyone has done empirical studies to determine
scientifically which associativity would be more useful in practice
eg by analysis of source code involving $ and comparing the number of
parentheses that would be needed in each case, and perhaps also some
studies involving the number of confused readers in each case...

Even though both versions are useful, it seems to me that faced with
the choice of choosing an associativity for an operator that does
function application, and given that prefix application is left
associative, there is one clear winner, but unfortunately the Haskell
committee didn't see it this way, and perhaps it is too late to ever
change this (just like :: and : which were mixed up for reasons
unknown). Especially since chains can already be composed using ".".


It would be very useful if the Haskell report explained *why* decisions were 
made, because there often seem to be good reasons that are not immediately 
obvious and sometimes counter-intuitive. I think the mystery surrounding :: 
and : might have been that originally people thought type annotations would 
hardly ever be needed whereas list cons is often needed. But now that it is 
regarded as good practice to put a type annotation before every top level 
value binding, and as the type system becomes more and more complex (eg with 
GADTs etc), type annotations are presumably far more common than list 
cons, so it would be good if Haskell Prime would swap these operators back to 
their de facto universal inter-language standard of list cons and type 
annotation respectively.


Regards, Brian. 




Re: [Haskell-cafe] Why is $ right associative instead of leftassociative?

2006-02-04 Thread Brian Hulley

Tomasz Zielonka wrote:

On Sat, Feb 04, 2006 at 07:15:47PM -, Brian Hulley wrote:

I think the mystery surrounding :: and : might have been that
originally people thought type annotations would hardly ever be
needed whereas list cons is often needed, but now that it is
regarded as good practice to put a type annotation before every top
level value binding, and as the type system becomes more and more
complex (eg with GADTs etc), type annotations are now presumably far
more common than list cons so it would be good if Haskell Prime
would swap these operators back to their de facto universal
inter-language standard of list cons and type annotation
respectively.


I am not convinced. Even if you really want to write types for every
top-level binding, it's only one :: per binding, which can have a
definition spanning for many lines and as complicated type as you
want. On the other hand, when you are doing complicated list
processing, it is not uncommon to have four (or more) :'s per _line_.


I wonder if extending the sugared list syntax would help here. The | symbol 
is used for list comprehensions but something along the lines of:


   [a,b,c ; tail]   ===   a :: b :: c :: tail    -- where :: means list cons


then there would seldom be any need to use the list cons symbol anywhere 
except for sections. I would use "," instead of ";" in the block syntax so 
that ";" could be freed for the above use and so that there would be a 
generic block construct {,,,} that could be used for records also (and could 
always be replaced by layout) eg


   P {x=5, y=6}

could be written also as

   P #    -- # allows a layout block to be started
       x = 5
       y = 6



Personally, I started my FP adventure with OCaml (which has the thing
the other way around), and I felt that the meanings of :: and : should
be reversed - before I even knew Haskell!


I see what you mean ;-). However the swapping of :: and : really is very 
confusing when one is used to things being the other way round. Also in 
natural language, ":" seems to have a much closer resonance with the 
type/kind annotation meaning than with constructing a list. I also wonder if 
it is such a good idea to make lists so special? Does this influence our 
thinking subconsciously to use list-based solutions when some other data 
structure may be better?


Regards, Brian. 




Re: [Haskell-cafe] Why is $ right associative instead of leftassociative?

2006-02-04 Thread Brian Hulley

Stefan Holdermans wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Brian wrote:


I think the mystery surrounding :: and : might have been that
originally people thought type annotations would hardly ever be
needed whereas list cons is often needed, but now that it is
regarded as good practice to put a type annotation before every top
level value binding, and as the type system becomes more and more
complex (eg with GADTs etc), type annotations are now presumably far
more common than list cons so it would be good if Haskell Prime
would swap these operators back to their de facto universal
inter-language standard of list cons and type annotation
respectively.


I don't think Haskell Prime should be about changing the look and
feel of the language.


Perhaps it is just a matter of aesthetics about :: and :, but I really feel 
these symbols have a de-facto meaning that should have been respected and 
that Haskell Prime would be a chance to correct this error. However no doubt 
I'm alone in this view so fair enough - it's just syntax after all and I can 
run my own programs through a pre-processor if I want them the other way 
round... :-)


Regards, Brian. 




Re: [Haskell-cafe] Why is $ right associative instead of leftassociative?

2006-02-04 Thread Brian Hulley

[EMAIL PROTECTED] wrote:

G'day all.

Quoting [EMAIL PROTECTED]:


This is the way that I normally express it.  Partly because I find
function application FAR more natural than right-associative
application,


I meant to say that I find function COMPOSITION more natural than
right-associative application.  It certainly fits better with my
personal biases about good functional programming style.


Yes the case you've made for $ being left associative is very compelling - 
namely that the existing associativity actively encourages a *bad* 
programming style in which the right associative $ hides the composition in 
a chain of function applications instead of allowing the composition to be 
explicit and neatly separate from its argument.


Moreover, the existing associativity of $ implies that whoever thought it up 
was confusing two concepts: application and composition, instead of allowing 
$ to take its proper place as an equal citizen to ".", with the 
associativity proper to its role as application alone.


Thus if $ were made left associative in Haskell Prime, this would add 
clarity to the thought forms associated with the language, which would 
(presumably) in turn lead to better programs being written in it.


Regards, Brian.




Re: [Haskell-cafe] Re[2]: strict Haskell dialect

2006-02-03 Thread Brian Hulley

Bulat Ziganshin wrote:

Hello Wolfgang,

Friday, February 03, 2006, 1:46:56 AM, you wrote:

i had one idea, what is somewhat corresponding to this discussion:

make a strict Haskell dialect. implement it by translating all
expressions of form f x into f $! x and then going to the
standard (lazy) haskell translator. the same for data fields - add
to all field definitions ! in translation process. then add to
this strict
Haskell language ability to _explicitly_ specify lazy fields and
lazy evaluation, for example using this ~ sign


[Apologies for replying to a reply of a reply but I don't seem to have 
received the original post]


I've been thinking along these lines too, because it has always seemed to me 
that laziness is just a real nuisance because it hides a lot of inefficiency 
under the carpet as well as making the time/space behaviour of programs 
difficult to understand...


One question is how to get some kind of do notation that would work well 
in a strict setting.
The existing do notation makes use of laziness in so far as the second arg 
of >> is only evaluated when needed. Perhaps a new keyword such as 'go' 
could be used to use >>= instead ie:


go {e1;e2;e3}   ===   e1 >>= (\_ -> (e2 >>= (\_ -> e3)))

Of course this doesn't solve the problem of how to translate programs that 
make heavy use of mapM etc.


I wonder: is monadic programming really dependent on laziness or is there a 
realistic (ie not impossibly complicated) way to use monads in a strict 
setting?


A related question is: could monadic programming ever be as efficient as 
side-effect programming?


Regards, Brian. 




Re: [Haskell-cafe] Re[2]: strict Haskell dialect

2006-02-03 Thread Brian Hulley

Jan-Willem Maessen wrote:


I pointed out some problems with strict Haskell in a recent talk, but
I think it'd be worth underscoring them here in this forum.


Is the text of this talk or points raised in it available online anywhere?

[snip]

There is one very difficult piece of syntax in a strict setting: The

*where* clause.  The problem is that it's natural to write a bunch of
bindings in a where clause which only scope over a few conditional
clauses.  I'm talking about stuff like this:

f x
  | p x               = ... a ... a ... a ... a ...
  | complex_condition = ... b ... b ... b ...
  | otherwise         = ... a ... b ...
  where a = horrible expression in x which is bottom when
            complex_condition is true.
        b = nasty expression in x which doesn't terminate when p x
            is true.
        complex_condition = big expression which
                            goes on for lines and lines
                            and would drive the reader
                            insane if it occurred in line.


Surely it would not be too difficult for the compiler to only evaluate the 
where bindings that are relevant depending on which guard evaluates to True, 
ie in your example, the binding for a would be evaluated if p x is True; 
otherwise complex_condition would be evaluated, and if True, b would be 
evaluated, otherwise a and b would be evaluated:


f x
    | p x = let a = ... in ... a ... a ...
    | otherwise = let
            complex_condition = ...
            b = ...
        in
            if complex_condition then
                ... b ... b ...
            else let
                    a = ... a ...
                in
                    ... a ... b ...

where all the messy (possibly duplicated) let's are generated by the 
compiler so the user can still use the nice where syntax.


Regards, Brian. 




Re: [Haskell-cafe] Re[2]: strict Haskell dialect

2006-02-03 Thread Brian Hulley

Robin Green wrote:

On Fri, 3 Feb 2006 19:33:12 -
Brian Hulley [EMAIL PROTECTED] wrote:

I've been thinking along these lines too, because it has always
seemed to me that laziness is just a real nuisance because it hides a
lot of inefficiency under the carpet as well as making the time/space
behaviour of programs difficult to understand...

One question is how to get some kind of do notation that would work
well in a strict setting.
The existing do notation makes use of laziness in so far as the
second arg of >> is only evaluated when needed. Perhaps a new
keyword such as 'go' could be used to use >>= instead ie:

go {e1;e2;e3}   ===   e1 >>= (\_ -> (e2 >>= (\_ -> e3)))


That's not necessary. >> has something in common with if', where

if' True x _ = x
if' False _ y = y

- in both cases, it makes sense to evaluate the arguments lazily.

So simply make strictness the default and have laziness annotations
(for arguments), instead of making laziness the default and having
strictness annotations.


Where would you put these laziness annotations?
If you put them in the function declaration eg as in:

if' :: ~a -> ~b -> Bool

presumably you'd want the compiler to pass the args as thunks instead of 
evaluated values. However this means that all args to every function would 
have to be passed as thunks, even though for strict functions these thunks 
would immediately be evaluated. The problem is that there is no way for the 
compiler to optimize out the thunk creation / evaluation step because it 
occurs across the black box of a function call, thus we wouldn't get the 
same efficiency as in a language such as ML where no thunks are created in 
the first place.


Ie there is a fundamental asymmetry between lazy annotations and strict 
annotations - it is trivial to go from lazy to strict before the function 
body is evaluated but impossible to unevaluate from strict back to lazy...


Regards, Brian. 




Re: [Haskell-cafe] Re[2]: strict Haskell dialect

2006-02-03 Thread Brian Hulley

John Meacham wrote:

On Fri, Feb 03, 2006 at 07:33:12PM -, Brian Hulley wrote:

One question is how to get some kind of do notation that would
work well in a strict setting.
The existing do notation makes use of laziness in so far as the
second arg of >> is only evaluated when needed. Perhaps a new
keyword such as 'go' could be used to use >>= instead ie:


you can override (>>) in your monad

instance Monad ... where
    a >> b = a `seq` b `seq` (a >>= \_ -> b)


unless I am misunderstanding what you want.

John


If strictness was the default (eg if the language were ML not Haskell), then 
in


putStr "hello" >> putStr (show 1)

both args to >> would be evaluated before >> was called. Thus putStr (show 
1) would be evaluated before the combined monad is actually run, which would 
be wasteful if we were using a monad with a >> function that only runs the 
rhs conditionally on the result of the lhs.
If Haskell were a strict language I think an equivalent for the do notation 
would have to lift everything (except the first expression) and use >>= 
instead of >>.


Regards, Brian. 




Re: [Haskell-cafe] Re[2]: strict Haskell dialect

2006-02-03 Thread Brian Hulley

Brian Hulley wrote:

if' :: ~a -> ~b -> Bool

Oooops :-)

 if' :: Bool -> ~a -> ~a -> a

Regards, Brian.



Re: [Haskell-cafe] Re[2]: strict Haskell dialect

2006-02-03 Thread Brian Hulley

Brian Hulley wrote:

Robin Green wrote:

[snip]
So simply make strictness the default and have laziness annotations
(for arguments), instead of making laziness the default and having
strictness annotations.


Where would you put these laziness annotations?
If you put them in the function declaration eg as in:

if' :: Bool -> ~a -> ~a -> a   [corrected]

presumably you'd want the compiler to pass the args as thunks instead
of evaluated values. However this means that all args to every
function would have to be passed as thunks, even though for strict
functions these thunks would immediately be evaluated. The problem is
that there is no way for the compiler to optimize out the thunk
creation / evaluation step because it occurs across the black box
of a function call, thus we wouldn't get the same efficiency as in a
language such as ML where no thunks are created in the first place.


I'm just sooo slow!!! ;-) Of course the laziness info would now be part of 
the function's type so the compiler would be able to generate the correct 
code to prepare thunks or evaluated values before calling the function. So 
your idea of laziness annotations for args would give the best of both 
worlds :-)


Regards, Brian. 




Re: [Haskell-cafe] Evaluating arithmetic expressions at run time

2006-01-28 Thread Brian Hulley

Andrew Savige wrote:

--- Cale Gibbard wrote:

Apart from moving to a lookup Map or something, a simple reordering
of the arguments allows you to shorten things up a bit:

myeval :: String -> Int -> Int -> Int
myeval "+" = (+)
myeval "-" = (-)
myeval "*" = (*)
etc.


Thanks to all for the excellent suggestions. I'm liking the
Haskell version of this function a lot more now. :-)

To help me learn Haskell, I'm converting a small Ruby
program to Haskell. In Ruby this can be done in one line:

def myeval(x, y, op)
  x.send op, y
end


This could be done in Haskell by

myeval :: a -> b -> (a -> b -> c) -> c
myeval x y op = op x y

eg myeval (+) 1 2
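
(Cale's other suggestion above, moving to a lookup Map, might look something 
like this - just a sketch, with made-up names:)

import qualified Data.Map as Map

ops :: Map.Map String (Int -> Int -> Int)
ops = Map.fromList [("+", (+)), ("-", (-)), ("*", (*))]

myevalMap :: Int -> Int -> String -> Maybe Int
myevalMap x y op = fmap (\f -> f x y) (Map.lookup op ops)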

Regards, Brian.


Re: [Haskell-cafe] Evaluating arithmetic expressions at run time

2006-01-28 Thread Brian Hulley

Brian Hulley wrote:

eg myeval (+) 1 2


myeval 1 2 (+)    -- I *always* seem to make at least one mistake
per post ;-)


(I originally wrote the code to take the op first, which is the usual 
Haskell convention so that you can do useful things with (myeval someop), 
but then I noticed you had the op arg last in your Ruby example so I changed 
the code but forgot to change my example...) 




Re: [Haskell-cafe] unary pattern matching

2006-01-27 Thread Brian Hulley

John Meacham wrote:

On Fri, Jan 27, 2006 at 12:28:23PM +1100, Donald Bruce Stewart wrote:

john:

I have often wanted a shorthand syntax for testing if a value
matches a given pattern. I want to implement such an extension for
jhc but can't decide an appropriate syntax so I thought I'd ask the
group. basically I want something like

/Left (Just _)/   expands to

\x -> case x of
    Left (Just _) -> True
    _ -> False


Something like pattern guards?

f x | Just _ <- x = putStrLn "something"


hmm.. how about

(| Left (Just _) |)

since | cannot be used as a section so it can be unambiguously lexed
as a different symbol. I think I like it. any other ideas?

John


(@ Left (Just _) @) would fit in with the use of @ in as-patterns.
Also, the as-pattern syntax could be stolen for expressions so that exp@pat 
would evaluate to True or False when this syntactic form appears in an 
expression instead of a pattern ie to give the following equivalence:


(@ pat @) exp   ===   exp @ pat

Regards, Brian. 




[Haskell-cafe] Another idea for record field selection and better namespace management

2006-01-27 Thread Brian Hulley

Hi -

To avoid the problems with so many names being put into a module's 
namespace, data declarations could implicitly define sub-modules and 
class/instance declarations as follows:


module M where
 data Foo = FooCon {x : Int}

would declare (as seen from inside M)

Foo, Foo.FooCon, Foo.x

and would further declare x and FooCon as instances of a global value 
type-class and constructor type class as follows (where //varid denotes the 
global typeclass corresponding to (a record field called) varid, and //conid 
denotes the global typeclass corresponding to the constructor conid)


class //x a b where
    x : a -> b

instance //x Foo Int where
    x Foo.FooCon{x=p} = p

class //FooCon a b where
    FooCon : a -> b

instance //FooCon Int Foo where
    FooCon = Foo.FooCon

The class declarations (generated by the compiler) are global, and there is 
no danger of conflicts with user class declarations because of the // 
prefix.
The instance declarations (also generated by the compiler) would be inserted 
into the module itself.


This would give at least three advantages:

1) The same names could be used for fields in more than one data declaration 
and these would be resolved using the well known type class mechanism


2) Some exciting new programming paradigms become instantly available due to 
the extension of allowing constructor type classes, namely we could then get 
the effect of extensible data types eg


   data Col1 a = One a
   data Col2 a = One a | Two a

   useOne :: (//One col a) => col -> a
   useOne (One x) = x

This is a lot more powerful than just OOP, because we could have different 
views of a data type, selecting out those components which are relevant to 
particular operations:


   data Element = TerminalPunct | TerminalValue | NonTerminal | Push | Pop

   data Action = Push | Pop
   data Insertion = TerminalValue | NonTerminal | Push

without having to artificially construct various injections...

3) If x is a record field, then because a typeclass and instance has been 
automatically generated, (x p) already uses the type of p to determine which 
overloaded x to use, so all that remains is to introduce a sugar to get a 
form of function application syntax that binds stronger than prefix 
application. I suggest p^x === (x p)

It is still possible to be completely explicit, by writing P.x p or p^P.x
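
For comparison, today's Haskell can approximate the compiler-generated //x 
class with a multi-parameter type class and a fundep - a sketch only, where 
HasX is a hypothetical name standing in for //x:

    {-# OPTIONS -fglasgow-exts #-}

    class HasX a b | a -> b where
        x :: a -> b

    data Foo = FooCon Int

    instance HasX Foo Int where
        x (FooCon p) = p

    -- x (FooCon 3) == 3, resolved from the type of the argument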

I think this solves all the problems that I had with the value space idea, 
and is also a fairly conservative extension to Haskell as it stands at the 
moment, except I wonder if it is possible to implement type classes for 
constructors?


Regards, Brian.





Re: [Haskell-cafe] Another idea for record field selection and better namespace management

2006-01-27 Thread Brian Hulley

Apologies - I've noticed some mistakes corrected as follows:

Brian Hulley wrote:

class //x a b where
 x : a -> b

class //FooCon a b where
 FooCon : a -> b


   class //x a b | a -> b where    -- I think this fundep is correct
       x :: a -> b                 -- I can never get used to having to write :: instead of :


   class //FooCon a b | b -> a where
       FooCon :: a -> b


This is a lot more powerful than just OOP, because we could have
different views of a data type, selecting out those components which
are relevant to particular operations:

   data Element = TerminalPunct | TerminalValue | NonTerminal |
Push | Pop
   data Action = Push | Pop
   data Insertion = TerminalValue | NonTerminal | Push

without having to artificially construct various injections...


No - these are not in fact views. A view would be given by multiple 
predicates eg (//Push a, //Pop a) => ... a ... as normal.


Regards, Brian. 




Re: [Haskell-cafe] Another idea for record field selection and better namespace management

2006-01-27 Thread Brian Hulley

Another correction...

Brian Hulley wrote:

   data Col1 a = One a
   data Col2 a = One a | Two a

   useOne :: (//One col a) => col -> a
   useOne (One x) = x


should be

   useOne :: (//One a col) => col -> a



Re: [Haskell-cafe] Avoiding name collisions by using value spaces instead of modules

2006-01-17 Thread Brian Hulley

Tomasz Zielonka wrote:

[A bit late reply - I've just returned from vacation]

On Sun, Jan 08, 2006 at 05:47:19PM -, Brian Hulley wrote:

All I'm proposing is that the compiler should do all this painful
work for you, so that you don't need to bother creating a different
file that then needs two import directives to achieve the effect I
want. Is there any case where you would *not* want a type to be
declared in its own module?


I can think of such cases - for example consider a set of mutually
recursive datatypes used to represent abstract syntax trees in some
language. Of course, I imagine that your modules could be introduced
in such a way that would still allow recursion, but it's simply more
natural for me to place all those declarations in one module named
Syntax or AST.


My idea was that modules would be hierarchical, so that for example you 
could have a module AST as follows:


module AST where

   data Data1 = ...
   data Data2 = ...

This would be equivalent to writing something like:

module AST where    -- in file prefixPath/AST.hs
   import AST.Data1 as Data1 (Data1)    -- using partially qualified module names (not yet supported AFAIK?)
   import AST.Data2 as Data2 (Data2)

module Data1 where    -- in file prefixPath/AST/Data1.hs
   data Data1 = ...
   data Data1 = ...

module Data2 where
   data Data2 = ...

If the fully qualified name of the AST module was prefixSeq.AST then the 
fully qualified names of the DataN modules would be prefixSeq.AST.DataN


In other words, one file could contain a tree of modules without having to 
physically put each module into files arranged as a tree on disk with import 
directives.


An advantage of this would be that you'd no longer need to think up 
different field names for each record that you use to keep track of state 
when traversing a data structure - something that quickly becomes extremely 
difficult as things are at the moment for large modules.





It is quite simple to create a new layout rule. My idea with this is
that all lines should start with zero or more tab characters
(non-tab leading whitespace is disallowed),


All lines start with at least zero tab characters, trivially.


I could also have said, all characters in leading whitespace must be tabs.




and all layout blocks should start on a new line.


That's a good coding practice


Thanks - glad to see someone agrees with me! :-)


(yes, you can write like this in Haskell
already), making your code more change-friendly, which is especially
important when you use some version control tool. It would be nice if
this could be enforced by the compiler, at least as some kind of a
warning. I encourage you to add such option to some Haskell compiler,
or a coding policy checking tool :-)


Moreover, it is possible to completely dump the ugly let..in
construct, and make = one of the tokens that can start a new layout
block, so instead of:

   f x = let a = x+1
             b = x + 2
         in a + b


How about

   f x =
   let
   a = x + 1
   b = x + 2
   in
   a + b


This is how I indent let..in at the moment. However I feel that let and 
in just take up space, and while useful in describing the abstract 
syntax, I don't see what use they have in the concrete syntax of a language, 
apart from the fact that in the current layout rule, let is needed to 
introduce a block. I think that because let and in are rather redundant, 
people like to squash them into the same lines as non-redundant code, 
leading to:


   f x = let a = x + 1
             b = x+2
         in a+ b

or a million times worse:

   f x = let a = x+1 ; b = x+2   -- mixing up two different ways of writing a block of things
             c = a + b
         in c



Anyway, I would use where here.


I agree this would be neater in this case - sometimes let..in is still 
needed eg in a branch of an if construct etc.





one would simply write:

   f x =
       a = x+1
       b = x+2
       a + b


I don't like it. It's shorter, but less readable too.


Yet this is very similar to how you write functions in C++, C, Java, C# etc 
so for anyone used to these languages, it seems very natural ie


   f(x) {
       a = x+1;    // Hurray! no need to write let :-)))
       b = x+2;    // ((single assignment simulating binding))
       return a+b;
   }


You could simply
write:

   f x =
       a + b
     where
       a = x+1
       b = x+2


or even

   f x = a + b where
   a = x+1
   b = x+2

Regards,
Brian. 




Re: [Haskell-cafe] Avoiding name collisions by using value spaces instead of modules

2006-01-17 Thread Brian Hulley

Tomasz Zielonka wrote:

On Sun, Jan 08, 2006 at 01:06:18PM -, Brian Hulley wrote:

5) We can get all the advantages of automatic namespace management
the OOP programmers take for granted, in functional programming, by
using value spaces as the analogue of objects, and can thereby get
rid of complicated import/export directives


There is nothing complicated in Haskell's module system. It's very
simple, explicit, independent from the type system and therefore easy
to understand.


There are many concepts that one needs to understand ie importing/exporting, 
qualified/unqualified, hiding, selecting, and a strange syntax that looks 
like tuples but isn't anything to do with tuples.



Verbose or low-level would be better accusations.


I agree with those as well.



It seems that you want to introduce some kind of C++'s Koenig's lookup
to Haskell. Is it your inspiration?


First argument lookup works well in C++, but perhaps wouldn't work nearly so 
well in Haskell due to the presence of polymorphism/higher order functions.


Another source of inspiration was Smalltalk, where instead of thinking in 
terms of files (modules) one just thinks in terms of objects (types) and 
methods (functions) and enters new types/functions via a browser instead of a 
text editor.


My real purpose is to try to find a way to be able to concentrate on 
values/types without having to worry about where to put them. Just as we 
take garbage collection (or other memory management) for granted, I'm trying 
to find some way of automatically managing the storage of value/type 
declarations.


In C# and Java, every class must be stored in a file of that name, and most 
C++ programmers follow this rule as a convention. Whereas in Haskell (or 
ML), where there are lots of small data declarations, I don't see any simple 
rule for where to put stuff. It is all too easy to end up with a gigantic 
module where one type has been converted into another and another etc etc 
and soon it is very difficult to think up different names for 
functions/variants/fields, and remember even where the functions are defined 
within the file, leading to much scrolling and search operations when 
editing code.



For me, C++ doesn't seem to be a
source of good ideas for programming language design ;-)


Certainly C++ has its problems, but I think some rather clever ideas can be 
salvaged from it (eg the use of traits in template programming) :-)


Best regards,
Brian. 




Re: [Haskell-cafe] Re: What does the Haskell type system do with show (1+2)?

2006-01-13 Thread Brian Hulley

Cale Gibbard wrote:

Snip

So long as we're going to have a defaulting mechanism, it seems a bit
odd to restrict it to Num, and to classes in the Prelude.


Instead of having literals such as 1 that could be Int, Integer, or Float 
etc, why not just have one Number type declared as something like:


data Number = NInt Int | NInteger Integer | NFloat Float | NDouble Double 
            | NRational Integer Integer | NComplex Number Number


etc so that all numeric literals would just be translated by the compiler 
into a Number. Arithmetic ops would then not be overloaded but the compiler 
could hopefully optimize out the extra indirection caused by using Number 
instead of plain Int, Integer, etc.
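
A sketch (with a cut-down Number and hypothetical names) of how an 
arithmetic operation would dispatch on the representation tag - the policy 
for mixed representations is left open here:

    data Number = NInt Int | NInteger Integer | NDouble Double

    addNumber :: Number -> Number -> Number
    addNumber (NInt a)     (NInt b)     = NInt (a + b)
    addNumber (NInteger a) (NInteger b) = NInteger (a + b)
    addNumber (NDouble a)  (NDouble b)  = NDouble (a + b)
    addNumber _            _            = error "mixed representations: coercion rules needed"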


Regards, Brian. 




Re: [Haskell-cafe] Avoiding name collisions by using value spaces instead of modules

2006-01-10 Thread Brian Hulley

Brian Hulley wrote:

Cale Gibbard wrote:

Unifying these two under a single operation is certainly trickier,
and it's a little more questionable that it should be done at all,
given that their types are so different -- below is the closest I
could come to it off-hand.
snip


Thanks! I'm impressed. Obviously there is a lot more power in type
classes than I'd thought.


Of course, this still doesn't solve the original problem I was trying to 
address, namely that I want identifier bindings to be pulled into scope by 
their typed context (ie value type or return type + arg types) so that 
functional programmers could get the same advantages as (in fact even more 
advantages than) OOP programmers.


With type classes, every use of any specific identifier, within the whole 
program, would have to be declared an instance of a single global type 
class, which would then be imported unqualified into the module so that you 
could write insert (1,2) m etc without having to qualify the word insert. 
(Because if these type classes/instances were imported qualified we'd just 
swap Set.insert for Collection.insert)


With the ty-tuple idea, all functions in a module are automatically 
organised into sub-modules and brought into scope only when needed, so every 
function in a program could be typed into a single file thus freeing the 
programmer from the onerous burden of having to work out where to put them 
manually. (The programmer would still put data declarations into different 
modules)


I must admit my thinking is strongly influenced by years of C++ programming, 
but so far I haven't seen any description of how one decides what module a 
function should be placed in in functional programming, and the existing 
module system seems like a pauper's alternative to static class methods in 
C++, C#, or Java, since in Haskell, you need to use the same name for the 
module and the type (for Data.Set etc) yet this choice of same name is not 
enforced by the language even though the user thinks of them as being the 
same, and in fact two import directives are needed to maintain the illusion 
that they are the same so you can write Set a and Set.insert etc...


Regards,
Brian Hulley 




[Haskell-cafe] Intersection types for Haskell?

2006-01-10 Thread Brian Hulley

Hi -
I'm wondering if there is any possibility of getting intersection types 
into Haskell. For example, at the moment there is no (proper) typing for:


   f g x y = (g x, g y)

Ideally, I'd like to be able to write:

   f :: (a -> b & c -> d) -> a -> c -> (b,d)

or

   f :: (a -> b a) -> c -> d -> (b c, b d)

which is perhaps clearer and prevents bad types such as (Int -> String & 
Int -> Char) by construction.


While it may be impossible (?) to infer such a type for f, would it be 
possible to make use of such an annotation (esp since Haskell with GHC 
extensions already has arbitrary rank polymorphism)?


Also, as a second point, could functional dependencies in type classes be 
written using a similar syntax eg instead of


   class Insert t c a | c a -> t where
       insert :: t -> c a -> c a

we could write:

   class Insert (h (c a)) c a where
       insert :: h (c a) -> c a -> c a

or
   class Insert t@(h (c a)) c a where   -- re-using as-pattern syntax
       insert :: t -> c a -> c a

to avoid having to have a special syntax just for functional dependencies 
and/or to be able to write more complicated fundeps more succinctly?


Regards,
Brian Hulley 




Re: [Haskell-cafe] Intersection types for Haskell?

2006-01-10 Thread Brian Hulley

José Miguel Vilaça wrote:

Hi

If I understand your problem, then the following is a solution:

--

{-# OPTIONS -fglasgow-exts #-}

class Foo a b where
   g :: a -> b

type A = {- change the following -} Int
type B = {- change the following -} Char

instance Foo A B where
   g a = {- change the following -} ' '

type C = {- change the following -} Float
type D = {- change the following -} String

instance Foo C D where
   g c = {- change the following -} ""


f :: (Foo a b, Foo c d) => a -> c -> (b, d)
f x y = (g x, g y)


Thanks for the workaround. However this does not seem to be quite so general 
as intersection types, because it would only allow me to define f for some 
specific g ie the g of Foo, rather than for any general function...


Regards,
Brian Hulley 




Re: [Haskell-cafe] Intersection types for Haskell?

2006-01-10 Thread Brian Hulley

Taral wrote:

On 1/10/06, Brian Hulley [EMAIL PROTECTED] wrote:

Hi -
I'm wondering if there is any possiblility of getting intersection
types into Haskell. For example, at the moment there is no (proper)
typing for:

f g x y = (g x, g y)

Ideally, I'd like to be able to write:

f :: (a -> b & c -> d) -> a -> c -> (b,d)


I have no idea what kind of function would have type (a -> b & c ->
d). Can you give an example?


g x = x

because g 3 = 3 so g has type Int -> Int but also g 'a' = 'a' so g has type 
Char -> Char hence g has type Int -> Int & Char -> Char


Also, h x = (x,x) ie Int -> (Int,Int) & Char -> (Char,Char)

The reason I can't just use a -> b for g's type is that then I would have no 
way to write out the result of f, since it is not (b,b)





f :: (a -> b a) -> c -> d -> (b c, b d)


f :: (forall a. a -> b a) -> c -> d -> (b c, b d)


That would be nice but unfortunately is not accepted by GHC because it 
cannot unify a -> a with a -> b a, for example the following does not compile:


{-# OPTIONS -fglasgow-exts #-}

f :: (forall a. a -> b a) -> c -> d -> (b c, b d)
f g x y = (g x, g y)

g x = x

main = do
    putStrLn (show (f g 3 'c'))
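
For what it's worth, the rank-2 signature is usable once the argument really 
does have the shape (forall a. a -> b a) - a sketch, instantiating b with a 
locally defined Identity wrapper:

    {-# OPTIONS -fglasgow-exts #-}

    newtype Identity a = Identity a

    f :: (forall a. a -> b a) -> c -> d -> (b c, b d)
    f g x y = (g x, g y)

    main :: IO ()
    main = case f Identity (3 :: Int) 'c' of
        (Identity n, Identity ch) -> print (n, ch)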

Regards,
Brian Hulley



Re: [Haskell-cafe] Intersection types for Haskell?

2006-01-10 Thread Brian Hulley

Brian Hulley wrote:

Taral wrote:

I have no idea what kind of function would have type (a -> b & c ->
d). Can you give an example?


g x = x

because g 3 = 3 so g has type Int -> Int but also g 'a' = 'a' so g
has type Char -> Char hence g has type Int -> Int & Char -> Char


Actually I should have said g's type *matches* (Int -> Int & Char -> Char) 
since of course g has type a -> a.


Regards, Brian. 




Re: [Haskell-cafe] Intersection types for Haskell?

2006-01-10 Thread Brian Hulley

Brian Hulley wrote:

snip
which is perhaps clearer and prevents bad types such as (Int ->
String & Int -> Char) by construction.


Oops! I forgot that functions with such types can exist via multi-parameter 
type classes and overloading - this may be one reason why intersection types 
have not yet been added to Haskell.




[Haskell-cafe] Avoiding name collisions by using value spaces instead of modules

2006-01-08 Thread Brian Hulley
 of the Set module (every type is also a module) for a binding 
for singleton which maps from a to Set a.


Finally (apologies for this long post), returning to the use of ^ to allow 
an object oriented way of thinking consider:


   insert :: a -> Set a -> Set a
   ps = singleton 3
   qs = insert 4 ps
   rs = ps^insert 4

When resolving insert used in the binding for rs, the compiler should see 
that we are looking for some function Set Int -> Int -> Set Int, and hence 
will be looking in the current module augmented by the Set module. However 
the Set module only has a binding for insert with type a -> Set a -> Set a. 
So the compiler should generate a new function insert' from insert by moving 
the first Set a arg to the front.
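
In today's Haskell the adapter the compiler would generate here is 
essentially flip - a sketch, with insert' as a hypothetical name:

    import qualified Data.Set as Set
    import Data.Set (Set)

    insert' :: Ord a => Set a -> a -> Set a
    insert' = flip Set.insert

    rs :: Set Int
    rs = insert' (Set.singleton 3) 4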


Summary:

1) Every value binding belongs to a value space represented by a ty-tuple 
which abstracts away from the details of the actual type


2) The space of ty-tuples forms a lattice, and in particular this means 
qualification is optional when there are no conflicts (we can use Leaf 3 
instead of Tree.Leaf 3), and we can use partial qualification to supply just 
the disambiguation info needed.


3) The TI algorithm should use the type of the expression (generated 
top-down from the top-level annotation) to search in the correct value 
spaces to resolve identifiers


4) Record field selection doesn't need any special scoping rules, and is 
just one example of a general object-oriented way of thinking about function 
application.


5) We can get all the advantages of automatic namespace management the OOP 
programmers take for granted, in functional programming, by using value 
spaces as the analogue of objects, and can thereby get rid of complicated 
import/export directives and improve code readability with less typing (but 
making more use of typing - excuse the pun :-))


I'm in the process of trying to implement a pre-processor for Haskell which 
would use the algorithms sketched above (as well as fixing a few other 
things I think are wrong with Haskell such as the way the current layout 
rule allows one to write code that would break if you replaced an identifier 
with one that had a different number of characters in it etc), so any 
feedback on these ideas would be welcome.


Thanks for reading so far!

Brian Hulley

PS Everything above is purely intended for resolving the kind of ad-hoc 
overloading that is not amenable to treatment by type classes (which 
transforms some subset of the ad-hoc-overloaded functions which share the 
same shape into a single function with implicit dictionary arguments (in the 
usual implementation)).




Re: [Haskell-cafe] Avoiding name collisions by using value spaces instead of modules

2006-01-08 Thread Brian Hulley


- Original Message - 
From: Daniel Fischer [EMAIL PROTECTED]

To: Brian Hulley [EMAIL PROTECTED]
Cc: Haskell-cafe haskell-cafe@haskell.org
Sent: Sunday, January 08, 2006 3:47 PM
Subject: Re: [Haskell-cafe] Avoiding name collisions by using value spaces 
instead of modules




On Sunday, 8 January 2006, 14:06, Brian Hulley wrote:

Hi -
A main problem I've found with Haskell is that within a module, too many
things are put into the same scope. For example

data Tree a b = Leaf a | Node {elem::b, lhs::Tree a b, rhs::Tree a b}

 snip
I propose that the above declaration should introduce a *new module* 
Tree,
as a sub module of the containing module, and Leaf, Node, elem etc will 
be

put into this module, and not the module containing the data declaration
itself.


What speaks against putting the data declaration in a separate module:

module ThisKindOfTrees where

data Tree a b = ...

and then use qualified imports (with a short alias), if you want to use
different kinds of trees in one module?


All I'm proposing is that the compiler should do all this painful work for 
you, so that you don't need to bother creating a different file that then 
needs two import directives to achieve the effect I want. Is there any case 
where you would *not* want a type to be declared in its own module?



Yes, more files, but, IMHO, much more readable.


In what way is it more readable?


snip
For example, for

module M where
foo :: forall b. Eq a => Set (Tree a) -> Tree ([a],b) -> [Tree (a,b)]

we ignore the forall and Eq, and replace tyvars and tuple the results to
get:

   M.(Set (Tree ?), Tree ( (,) ( [] ?) ?), [] (Tree ( (,) ? ?))).foo

as the fully qualified reference to the entity we've just declared.


Looks absolutely horrible to me. What would we gain?
We could have

M.(Set (Tree ?), Tree ( (,) ( [] ?) ?), [] (Tree ( (,) ? ?))).foo
and
M.(Int, Set (Tree ?), Tree ( (,) ( [] ?) ?), [] (Tree ( (,) ? ?))).foo

but why? Do we really want it? I certainly don't.


I agree that I would certainly not want to have to write out the fully 
qualified name (or superfluously qualified name as you point out with 
Int,...), but I think we would gain a great deal from this, because just by 
making a declaration in module M, we've effectively created an infinite 
number of child modules that the declaration belongs to, without having to 
create an infinite number of files and write an infinite number of import 
directives in M.


For example, suppose I'm writing a module M that deals with grammar, where 
the elements in a grammar rule are parameterised so that rules can be 
written using strings but processed as if we'd used ints instead:


   data Element a = Terminal a | Nonterminal a | Action a

   data Rule a = Rule (Element a) [[Element a]]

Now I want to convert elements and rules from a to Int, so at the moment I 
have to write:


   convertElement :: Element a -> CM (Element Int)
   ...

   convertRule :: Rule a -> CM (Rule Int)

for some appropriate monad CM.
Whereas I would have much preferred to use just the word convert in both 
cases. It is tremendously annoying to have to suffix everything with the 
type name.


In another situation, suppose we have two types T1 and T2, and some function 
convert :: T1 -> T2
The problem I have is which module (if I used a separate module for T1 and 
T2), should I put the convert function in? Essentially I think it belongs to 
the space of relations between T1 and T2, hence my idea to use tuple 
notation to get a module called (Q1,Q2) eg (Set,Map). But I certainly don't 
want the bother of having to create a new file and type import directives 
into M every time I want to define a function on some different relation 
space.


Really I don't want to have to think about modules at all, since I'd like to 
write code that organises itself into modules (using these ty-tuples and 
top-down type/identifier-resolution inference) so I can concentrate on typed 
values and relations between them without all the module-level plumbing.



snip

you'd have havoc because sometimes you've just made an error -- which wouldn't
then be spotted by the type-checker.


I agree this could be a disadvantage - ease of coding is gained but some 
kinds of errors cannot be caught so easily.




Finally (apologies for this long post), returning to the use of ^ to 
allow

an object oriented way of thinking consider:

insert :: a -> Set a -> Set a
ps = singleton 3
qs = insert 4 ps
rs = ps^insert 4

When resolving insert used in the binding for rs, the compiler should see
that we are looking for some function Set Int -> Int -> Set Int, and hence
will be looking in the current module augmented by the Set module. However
the Set module only has a binding for insert with type a -> Set a -> Set a.

So the compiler should generate a new function insert' from insert by
moving the first Set a arg to the front.


Automatic permutation of arguments? Has its merits, but goodbye to
type-safety, I believe

Re: [Haskell-cafe] Avoiding name collisions by using value spaces instead of modules

2006-01-08 Thread Brian Hulley


- Original Message - 
From: Cale Gibbard [EMAIL PROTECTED]

To: Brian Hulley [EMAIL PROTECTED]
Cc: Daniel Fischer [EMAIL PROTECTED]; Haskell-cafe 
haskell-cafe@haskell.org

Sent: Sunday, January 08, 2006 5:54 PM
Subject: Re: [Haskell-cafe] Avoiding name collisions by using value spaces 
instead of modules




On 08/01/06, Brian Hulley [EMAIL PROTECTED] wrote:
For example, suppose I'm writing a module M that deals with grammar, 
where

the elements in a grammar rule are parameterised so that rules can be
written using strings but processed as if we'd used ints instead:

data Element a = Terminal a | Nonterminal a | Action a

data Rule a = Rule (Element a) [[Element a]]

Now I want to convert elements and rules from a to Int, so at the moment 
I

have to write:

convertElement :: Element a -> CM (Element Int)
...

convertRule :: Rule a -> CM (Rule Int)

for some appropriate monad CM.
Whereas I would have much preferred to use just the word convert in 
both

cases. It is tremendously annoying to have to suffix everything with the
type name.



This is what typeclasses are for.



class Convert c where
   convert :: c a -> CM (c Int)
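
A self-contained sketch of what this looks like for the grammar types from 
earlier in the thread - CM and toInt are assumptions standing in for 
whatever the real conversion monad does:

    import Control.Monad.State

    type CM = State Int

    toInt :: a -> CM Int
    toInt _ = do
        n <- get
        put (n + 1)
        return n

    data Element a = Terminal a | Nonterminal a | Action a

    class Convert c where
        convert :: c a -> CM (c Int)

    instance Convert Element where
        convert (Terminal a)    = fmap Terminal    (toInt a)
        convert (Nonterminal a) = fmap Nonterminal (toInt a)
        convert (Action a)      = fmap Action      (toInt a)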


Type classes just seem overkill for this kind of thing. All I want is 
compile time resolution of an overloaded identifier, whereas type classes 
give all the machinery that would be needed if I wanted runtime ad-hoc 
polymorphism, with all the attendant verbosity, just so that the compiler 
can then optimize out the runtime polymorphism behind the scenes for cases 
like the example above.


After all, I just want to write two very simple functions, so the effort to 
factor into a type class + two instances, also having to include the Convert 
c in the context whenever I call one of them just seems really painful. 
Also, the word Convert is now used up as well...


Also, when I'm just writing code in an exploratory way, I don't know in 
advance what the common things are that could be factored out into type 
classes (except perhaps in very simple examples like that above), so while 
I'm writing the code for the first time there is still a problem trying to 
think up different names.


Regards,
Brian. 




Re: [Haskell-cafe] Avoiding name collisions by using value spaces instead of modules

2006-01-08 Thread Brian Hulley

Cale Gibbard wrote:

snip


Thanks for the illustration - I see another advantage of type classes: 
you only need to write the type signature once (in the class 
declaration) instead of before each instance binding.



Secondly, if the functions are really different, and you never plan to
use them polymorphically, why the heck would you want to call them the
same thing? That's just confusing to anyone that has to read the code
later.


For example, Data.Map declares:

insert :: Ord k => k -> a -> Map k a -> Map k a

whereas Data.Set declares:

insert :: Ord a => a -> Set a -> Set a

This is an example where type classes can't be applied even though the 
functions in a sense do the same thing. My system would solve this problem, 
by allowing the programmer to type d = insert a b c and have the type 
inference algorithm work out that Data.Map.insert was meant, as long as c or 
d has been typed as Map p q.


But perhaps there is a way to get the signature for Data.Map.insert into the 
same form as that of Data.Set.insert?
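
One possible answer, as a sketch: uncurry the key and value so that a Map 
insert takes a single element, just as a Set insert does (insertPair is a 
hypothetical name):

    import qualified Data.Map as Map
    import Data.Map (Map)

    insertPair :: Ord k => (k, v) -> Map k v -> Map k v
    insertPair (k, v) = Map.insert k v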


Regards, Brian.




Re: [Haskell-cafe] Avoiding name collisions by using value spaces instead of modules

2006-01-08 Thread Brian Hulley

Cale Gibbard wrote:

Unifying these two under a single operation is certainly trickier,
and it's a little more questionable that it should be done at all,
given that their types are so different -- below is the closest I
could come to it off-hand.

---
{-# OPTIONS_GHC -fglasgow-exts #-} -- for fundeps/multiparameter classes
import qualified Data.Map as Map
import Data.Map (Map)
import qualified Data.Set as Set
import Data.Set (Set)

class Insert t c a | c a -> t where
    insert :: t -> c a -> c a

instance (Ord a) => Insert a Set a where
    insert x s = Set.insert x s

instance (Ord k) => Insert (k,a) (Map k) a where
    insert (k,v) m = Map.insert k v m

exampleSet = insert 5 $ insert 6 $ Set.empty
exampleMap = insert (1,2) $ insert (2,7) $ Map.empty



Perhaps someone else will have some ideas as to suitable typeclass
magic to allow for the curried form rather than using tuples.

 - Cale



Oh, this is a little less general, but simpler to use:

{-# OPTIONS_GHC -fglasgow-exts #-}
import qualified Data.Map as Map
import Data.Map (Map)
import qualified Data.Set as Set
import Data.Set (Set)

class Insert t c | c -> t where
    insert :: t -> c -> c

instance (Ord a) => Insert a (Set a) where
    insert x s = Set.insert x s

instance (Ord k) => Insert (k,a) (Map k a) where
    insert (k,v) m = Map.insert k v m

exampleSet = insert 5 $ insert 6 $ Set.empty
exampleMap = insert (1,2) $ insert (2,7) $ Map.empty


Thanks! I'm impressed. Obviously there is a lot more power in type classes 
than I'd thought. I hadn't realised that you could separate the Ord a and 
Ord k from the type signature in the class declaration, and put them in 
instance declarations like that (for example).
It would be really interesting to see how far one could go in factoring all 
the collection type functions/values into type classes.


Best regards,
Brian. 

