Re: "where" block local to a guard?

2002-09-17 Thread Brian Boutel



Dr Mark H Phillips wrote:
> Thanks for the explanation!
> 
> On Tue, 2002-09-17 at 19:07, Brian Boutel wrote:
> 
>>You can't do this because where clauses are not part of the expression 
>>syntax. If they were, expressions like
>>
>>  let a=b in c where d=e
>>or
>>  if a then b else c where d=e
>>
>>would be ambiguous, unless you adopt arbitrary rules about the
>>precedences, and such arbitrary rules are considered a bad thing.
> 
> 
> I'm trying to see how ambiguity might arise.  Do you mean something
> like:
> 
> let a=1 in a+a where a=3

Yes.

> 
> or have you something different in mind?
> 
> And I can't yet think of a situation where
> 
> if a then b else c where d=e
> 
> would cause problems.
> 

The question is whether the local definition of d scopes over the whole 
conditional expression, or just over the else part. As in

if a then d else 2*d where d=e

Is the first d the one defined in the where, or one from an enclosing 
declaration?
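
The two readings can be made explicit with let (which, unlike where, is part
of the expression syntax); this is only a sketch, with a, e, and an outer d
assumed to be in scope, and the names r1 and r2 mine:

r1 = let d = e in (if a then d else 2*d)   -- where scopes over the whole conditional
r2 = if a then d else (let d = e in 2*d)   -- where scopes over the else part only

The two differ precisely in whether the first d is the local one.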

It's instructive to write a grammar quite abstractly, with rules like

exp <- if exp then exp else exp
  | let decls in exp
  | exp where decls
  | ...

decls <- decl | decl decls
decl <- var = exp

and then feed this into a parser generator like yacc, and look at the 
conflicts it generates. Each one has to be resolved by a shift or reduce 
decision, or possibly a choice between two reduces, but whichever is 
chosen, code written on a mistaken assumption will still compile but 
produce a semantically incorrect program.

--brian

-- 
Brian Boutel
Wellington New Zealand



___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: "where" block local to a guard?

2002-09-17 Thread Brian Boutel



Dr Mark H Phillips wrote:
> Hi,
> 
> Suppose you have some function
> 
> functn :: Int -> Int
> functn i
> | i>5   = t  * i
> | i>0   = t_ * i
> | otherwise = 1
> where
> t  = functn (i-2)
> t_ = functn (i-1)
> 
> Notice that t and t_ are really local to a guard, rather
> than to the whole guard section.  Why then, can't you write:
> 
> functn :: Int -> Int
> functn i
> | i>5   = t * i
> where
> t = functn (i-2)
> | i>0   = t * i
> where
>   t = functn (i-1)
> | otherwise = 1
> 
> In particular, the above would mean you wouldn't need two names 
> t and t_, you could just use t for both!
> 
> Am I doing something wrongly, or is there a good reason why
> where isn't allowed to be used in this way?
>

You can't do this because where clauses are not part of the expression 
syntax. If they were, expressions like

let a=b in c where d=e
or
if a then b else c where d=e

would be ambiguous, unless you adopt arbitrary rules about the
precedences, and such arbitrary rules are considered a bad thing.

Very early on in the design of Haskell, the issue of how to deal with 
the two alternative styles for local definitions (let and where) was 
resolved by only allowing let in the expression syntax, but allowing 
where as part of the equation syntax. The nice thing about this is that, 
apart from removing ambiguities in expressions, where syntax provides a 
way of writing definitions which span several guarded right-hand-sides, 
and in the simple case of a single rhs, looks just like a where 
expression, so allows people to write effectively in that style.
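
For instance, a small sketch in which a single where binding scopes over all
the guarded alternatives of one equation:

classify :: Int -> String
classify n
  | n > limit  = "big"
  | n == limit = "exact"
  | otherwise  = "small"
  where limit = 10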

--brian

-- 
Brian Boutel
Wellington New Zealand



___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: Simpler Fibonacci function

2002-02-10 Thread Brian Boutel

You are, of course, welcome to write a new tutorial that remedies the 
deficiencies you find in the original. I encourage you to do so.


Eray Ozkural (exa) wrote:


> 
> Thanks for pointing out. Nevertheless, the tutorial does have room for 
> improvement.
> 
>


--brian

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: varying number of arguments restriction

2001-10-30 Thread Brian Boutel

Ashley Yakeley wrote:
> 
> At 2001-10-30 11:01, Hal Daume wrote:
> 
> >obviously i can rewrite:
> >
> >foo [] = ""
> >foo s  = (snd . head) s
> >
> >but this is uglier.
> 
> I'm not sure. I actually prefer it written out so that the number of
> arguments in the cases matches (as GHC enforces).
> 

It's defined in the Report, not a GHC idiosyncrasy. As to why, I don't
really remember, but I suspect it had to do with a desire by some
members of the Haskell Committee to require that the patterns in all
clauses of a function binding were disjoint, so that reasoning about
programs could deal with each clause independently. This was not
adopted, and the alternative top-to-bottom, left-to-right semantics were
chosen instead, but there was still a feeling that good style demanded
disjointness.

In that style, the second clause could be written

foo ((x,y):xs) = y
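
so the whole definition in the disjoint style would read (a sketch; the
signature is inferred from the two clauses):

foo :: [(a, String)] -> String
foo []         = ""
foo ((x,y):xs) = y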

I don't know whether this is still true, but it used to be argued that
this was likely to be more efficient because compilers could produce
really good pattern-matching code.

--brian

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: Haskell 98 - Standard Prelude - Floating Class

2001-10-23 Thread Brian Boutel

[EMAIL PROTECTED] wrote:
> 
> 
> How about making default method for asin
> 
> asin x = atan(x/sqrt(1-x^2))
> 
> Can't be worse than the default for (**) ;-)
> 

Oh, it can. As well as its own problems when x is close to 1, it
inherits, through the default definition of sqrt, the problems of (**)
when x is near 0.
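
To see it, here is the proposed default transcribed standalone (the name is
mine, to avoid clashing with the Prelude):

asinDefault :: Double -> Double
asinDefault x = atan (x / sqrt (1 - x^2))

Near x = 1, the subtraction 1 - x^2 cancels catastrophically, so most of the
significant digits are lost; and since the default sqrt z = z ** 0.5 and the
default x ** y = exp (log x * y), the near-zero argument also goes through
log of a very small number.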

--brian

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: strong typing is not a panaceum, and, anyway...

2001-10-19 Thread Brian Boutel

Jerzy Karczmarczuk wrote:
> 
> Brian Boutel to Sergey Mechveliani:
> 
> > > There is no scientific reason why  all  computations with types and
> > > type resolution should preceed all computations with non-types.
> 
> > No scientific reason, but a strong engineering reason.
> >
> > The engineering idea is to test a design with all available tools before
> > building it. That way there will be no disasters that could have been
> > foreseen. The computing equivalent of an engineering disaster is for a
> > program to get a run-time error or to produce an incorrect result. If
> > this outcome is acceptable, then the program probably wasn't important
> > enough to be worth writing in the first place.
> 
> If an entity is sufficiently complex, there will be always a margin of
> error. Good if avoidable, but...
> 
> Would you apply the same philosophy of "non-importance" of a possibly bugged
> result, to procreating children?...

Comparing breeding children to programming is surely a little
far-fetched. I always enjoyed programming, but not nearly as much as
procreating ;-)

Anyway, procreating children is not science, nor yet engineering. It
must be art, where the concept of "bug" does not exist.

--brian

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: strong typing?

2001-10-18 Thread Brian Boutel

"S.D.Mechveliani" wrote:
> 

> I am not a specialist and can mistake and confuse things, but I
> wonder
> whether a notion of a strongly typed language is so
> scientifically important.
> The same is with the `compile-time' and `run-time' separation.
> There is no scientific reason why  all  computations with types and
> type resolution should preceed all computations with non-types.
> Very often the types need to behave like ordinary data.
> Would it be reasonable to avoid as possible the restriction of
> strong typing in language specification?

No scientific reason, but a strong engineering reason. 

The engineering idea is to test a design with all available tools before
building it. That way there will be no disasters that could have been
foreseen. The computing equivalent of an engineering disaster is for a
program to get a run-time error or to produce an incorrect result. If
this outcome is acceptable, then the program probably wasn't important
enough to be worth writing in the first place.

--brian

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: Tab "\t" character behaviour in (Win)hugs/ghc

2001-09-18 Thread Brian Boutel



Sigbjorn Finne wrote:
> >
> > What does the language definition say about this?
> 
> Nothing at all, I believe, but the convention is for tab characters
> to be interpreted by an output device as moving the cursor to
> the next tab stop/alignment column. In the absence of any custom
> set of tab stops, the convention is to space them evenly every
> 8 characters.
>

Actually, Appendix B3 of the Haskell 98 Report says

The "indentation" of a lexeme is the column number indicating the start
of that lexeme; the indentation
of a line is the indentation of its leftmost lexeme. To determine the
column number, assume a
fixed-width font with this tab convention: tab stops are 8 characters
apart, and a tab character causes
the insertion of enough spaces to align the current position with the
next tab stop. For the purposes of
the layout rule, Unicode characters in a source program are considered
to be of the same, fixed,
width as an ASCII character. The first column is designated column 1,
not 0. 
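
A small executable rendering of that column arithmetic (the function name is
mine): the column reached by a tab seen at a given column is

-- Tab stops fall at columns 1, 9, 17, ...; the first column is column 1.
nextTabStop :: Int -> Int
nextTabStop col = col + 8 - ((col - 1) `mod` 8)

so a tab anywhere in columns 1 to 8 moves to column 9, and a tab at column 9
moves to column 17.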

--brian

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell

___
Hugs-Bugs mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/hugs-bugs



Re: More feedback on Haskell 98 modules from the Programatica Team

2001-08-10 Thread Brian Boutel

Simon Peyton-Jones wrote:
> 
> | However, I think there is a risk that name clashes may be
> | introduced. If module A defines and exports f, and imports
> | (qualified) and exports module B, which also defines f, then
> | a module C that imports A has two fs, both of which have the
> | qualified name A.f in C, even though there is no conflict in
> | A. This suggests that only unqualified imports should be
> | exported in a "module X" export list entry.
> 
> The Report already covers this point (though I don't have it to hand).
> There must be no name clashes among the *unqualified* names of the
> exported things; so in your example, module A's export list is illegal.
> 

I'm not sure about this.

The report says (about export lists):

5. The set of all entities brought into scope from a module m by one or
more unqualified import declarations may be named by the form `module
m', which is equivalent to listing all of the entities imported from the
module.

My example was:
module A (f, module B) where
import B (g)
import qualified B (f)
f = ...

At present, this is legal, and a module C, which imports A, sees f
(defined in A) and g (imported from B). B's f is imported for local use
in A, is qualified to avoid a name clash in A, and is not exported by A.

You are proposing to drop the word "unqualified" from the rule, which
would result in the addition of B's f to A's export list, creating a
name clash there. As you say, this is detected as an error in A. My
point was that there is no error with the present wording of the report. 

Qualified import is a mechanism for avoiding name clashes, but export of
qualified names changes the module part of the name, with the risk of
creating a name clash. This is an argument for keeping the requirement
that export of qualified names is explicit, so that the programmer can
check the validity of each one as it is written, and not allowing bulk
export of all imported qualified names through a "module X" export list
item.
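
For illustration, a hypothetical module C importing the A above; under the
proposed change, A's own f and B's f would both appear in C as A.f:

module C where
import A        -- sees A's own f and, with the change, B's f, both as A.f
h = A.f         -- a name clash: no way to say which f is meant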

--brian

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: infelicity in module imports

2001-07-12 Thread Brian Boutel

Wolfgang Lux wrote:
> 
> Brian Boutel wrote
> 
> > Option 2 is closer to what the syntax of imports, read as English, suggests
> > is intended, but, if it wasn't for that, I'd be promoting option 1.
> > The primary purpose of being able to restrict imports is to avoid name
> > clashes, and with qualified import there is no risk of a clash, so no need
> > for restrictions.
> 
> This is not true since Haskell allows for the renaming of modules on
> imports. If you look at the example in section 5.3.2 of the report,
> there is the example
> 
>   module M where
> import qualified Foo as A
> import qualified Bar as A
> x = A.f
> 
> Obviously there is a name clash if both, Foo and Bar export symbol f.
> 

Obviously you can rename modules to create a name clash, but it seems a
silly thing to do.

What I was trying to say was that in

 module M where
   import  Foo 
   import  Bar hiding (f)

where both Foo and Bar export f, there is no reason to not import all
the qualified names in Bar, because no name clash will result if you do.

However, this argument is now moot, because the resolution of this issue
is now clear, and is good enough.

 --brian

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: infelicity in module imports

2001-07-12 Thread Brian Boutel

Simon Peyton-Jones wrote:
> 
> | There is still a
> | strange asymmetry, too. Whereas adding "qualified" to "import
> | Modname ( a, b, c)" doesn't change which entities are
> | imported, just the ability to refer to them by unqualified
> | names, adding qualified" to "import Modname hiding ( a, b,
> | c)" has the effect of importing everything that was previously hidden.
> 
> Not so.  I hope the Report now unambiguously states that
> 
> import M hiding (a,b,c)
> import qualified M hiding(a,b,c)
> 
> imports exactly the same entities (namely all that M exports
> except a,b,c), only in the latter case only the qualified names are
> brought into scope.
> 
> Can you suggest a way I could state it more clearly in the Report?
> 

I'm sure the version of the report I looked at yesterday still said that
hiding clauses had no effect in qualified imports, which was the basis
of my remark, but today's version clearly doesn't.

I think the latest version is about as clear as we will get.

--brian

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: infelicity in module imports

2001-07-11 Thread Brian Boutel

Option 2 is closer to what the syntax of imports, read as English, suggests
is intended, but, if it wasn't for that, I'd be promoting option 1.
The primary purpose of being able to restrict imports is to avoid name
clashes, and with qualified import there is no risk of a clash, so no need
for restrictions.

Even with option 2, there is scope for confusion. "Import" without
"qualified", imports both qualified and unqualified names, but adding the
word "qualified" doesn't make any difference to the position of qualified
names, but instead silently fails to import unqualified names.
There is still a strange asymmetry, too. Whereas adding "qualified" to
"import Modname ( a, b, c)" doesn't change which entities are imported, just
the ability to refer to them by unqualified names, adding "qualified" to
"import Modname hiding ( a, b, c)" has the effect of importing everything
that was previously hidden.

Personally, I think the right solution is to import entire modules (the
exported parts) qualified, and optionally to allow unqualified reference to
some or all names, with a syntax like
import modid [as modid] [unqualifying ([all except] impspec | all)]
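
For instance, under that hypothetical syntax:

import Foo as F unqualifying (g, h)          -- g, h also usable unqualified
import Foo as F unqualifying all             -- everything also unqualified
import Foo as F unqualifying (all except g)  -- all but g also unqualified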

but it's probably too late for this.

--brian



Simon Peyton-Jones wrote:

> Folks
>
> It seems that I forgot to send this message a couple of weeks ago.
> Assuming that silence meant assent, I implemented the proposal below
> in the report I put out yesterday.  But in this case silence meant you
> hadn't
> been asked (an excellent way to reach consensus that I must remember
> for the future).
>
> So here's the message anyway.  I don't think it's controversial, since
> it's
> the outcome the cognoscenti were seeking, and no one else will care.
> Well, so I hope!
>
> Simon
>
> | > In short, an import *always* brings the entire *qualified*
> | > set of names into scope. Hiding and revealing applies only
> | > to unqualified names.  I must say that I thought GHC implemented
> | > this rule; if not I should fix it.
> |
> | That's not my reading of the report, and it's not what GHC implements.
> |
> |   import A (f)
> |
> | brings only f and A.f into scope.
>
> How embarrassing.  Now I look at it (yet) again, the report is certainly
> ambiguous about whether
> import A(f)
> imports A.g as well.  But SimonM is right to say that the implication
> is that it does *not* (contrary to my earlier message).  But if it does
> not, then the treatment of hiding and explicit-listing is inconsistent,
> which is a Bug.
>
> There are two consistent positions
>
> 1. Every import of module A (no matter how constrained) imports
> all of A's exports with qualified names.  Import of qualified names is
> unaffected by both hiding clauses and the explicit entity list
>
> 2. The explicit entity list, or hiding clause, for an import determines
> which
> entities are imported.  The qualified names of all these entities are
> brought
> into scope; in addition, for an unqualified import the unqualified names
> are
> brought into scope too.
>
> Everyone who has spoken favours (2), and indeed GHC implements it.
> So I propose to change the report to say that much more explicitly.
>
> Any objections?
>
> Simon
>
> ___
> Haskell mailing list
> [EMAIL PROTECTED]
> http://www.haskell.org/mailman/listinfo/haskell

-


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: Happy and Macros (was Re: ANNOUNCE: Happy 1.10 released)

2001-05-11 Thread Brian Boutel

"Carl R. Witty" wrote:
> 
> "Manuel M. T. Chakravarty" <[EMAIL PROTECTED]> writes:
> 
> > I don't think, the point is the test for non-ambiguity.  At
> > least, Doitse's and my self-optimising parser combinator
> > library will detect that a grammar is ambigious when you
> > parse a sentence involving the ambiguous productions.  So,
> > you can check that by parsing a file involving all grammar
> > constructs of the language.
> 
> Sorry, doesn't work.  Where do you get this "file involving all
> grammar constructs of the language"?
> 
> If such an approach worked, you could use it to determine whether an
> arbitrary context-free grammar was ambiguous; but this problem is
> undecidable.
> 

This illustrates the difference between generality and usefulness.

Often, one is less interested in learning that a grammar is ambiguous
than learning that it is not. 

If you have a parser generator for a class of grammars, it will tell you
if (and probably why) a candidate grammar you feed to it is not a member
of that class. If it is accepted by, for example, a parser generator for
LR(1) grammars, then it is a DCFG and therefore unambiguous.

If you start with a "natural" grammar for the language, i.e. one that
corresponds to the abstract syntax, and use a generator for a broad
class of grammars (e.g. LR(1)), then failure will give a good hint that
the only way to get an unambiguous grammar in that class is to restrict
the language, and you can use that information to make design decisions.

For example, although Haskell has both 'let' and 'where' for introducing
local definitions, thereby reflecting the typical committee decision
when there are varying stylistic preferences, 'where' is not part of the
expression syntax. If you write a grammar which includes the "natural"
productions

exp -> let defs in exp
exp -> exp where defs

and put this through a suitable generator, you will see why not.

Of course, it is possible that an unambiguous grammar will fail to be
LR(1) - by being non-deterministic, so the proper conclusion is that "G
is ambiguous or non-deterministic". But that is close enough for most
purposes.

Early versions of Hope had both 'let' and 'where' as expressions, and
had three different forms of conditional. Most combinations of these
constructs would interact to create ambiguity. The hand-coded parsers
in use had not picked this up. That shows the advantage of using a
generator - you get a constructive proof that the grammar is in the
desired class.

--brian

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: Revamping the numeric classes

2001-02-06 Thread Brian Boutel

Dylan Thurston wrote:
> 

> 
> These sound great to me.  If Haskell/2 is indeed open to such changes,
> would it also be possible to revamp the numeric modules?  As a
> mathematician, I get annoyed by such things as
> 
> * (+) and (-) being lumped in with (*) (doesn't anyone use vector spaces?)
> 
> * the function 'atan2' being mixed in with a bunch of operations very
>   specific to the floating point format in the 'RealFloat' class.
>   Same problem (though less serious) with 'quot', etc., and
>   'toInteger' in the Integral class.
> 
> * Superfluous superclasses: why are Show and Eq superclasses of Num?
>   Not all numeric types have decidable equality.  Think arbitrary
>   precision reals.
> 

Haskell was intended for use by programmers who may not be
mathematicians, as a general purpose language. Changes to keep
mathematicians happy tend to make it less understandable and attractive
to everyone else.

Specifically:

* most usage of (+), (-), (*) is on numbers which support all of them.

* Haskell equality is a defined operation, not a primitive, and may not
be decidable. It does not always define equivalence classes, because
a==a may be Bottom, so what's the problem? It would be a problem,
though, to have to explain to a beginner why they can't print the result
of a computation.
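
As a concrete instance of a==a being Bottom: equality on an infinite value
never terminates, for example

ones :: [Int]
ones = 1 : ones

looping :: Bool
looping = ones == ones   -- Bottom: the comparison runs forever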


--brian

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: inference question

2000-08-31 Thread Brian Boutel

William Lee Irwin III wrote:

Example> let x = cmethod . fromNat $ 1 in 0
ERROR: Unresolved overloading
*** Type   : (Q a, P a) => Integer
*** Expression : let {...} in 0


The type of x is t (for some t in Num)

OK. What is t? It's unspecified. You don't actually need it. Too bad.

Type inference depends both on the arguments of a function and on the
use of the result. While functions with types like t->t can determine t
from either, functions with types like Integer -> t need the surrounding
context. If you discard the result (your x is never used) t remains
ambiguous. Haskell regards this as an error.

Does this matter? Well, sometimes it does, and sometimes it is just a
nuisance, but in your case, the definition of x is dead code, and you
are free to delete it. If you were to use x somewhere, your problem
would probably go away.
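
A minimal sketch of the same failure, with an invented single-method class
standing in for Q and P:

class P a where fromNat' :: Integer -> a
instance P Integer where fromNat' = id

-- Rejected: x is never used, so its type (P a => a) stays ambiguous.
-- bad = let x = fromNat' 1 in (0 :: Integer)

-- Accepted: using x lets the surrounding context fix t.
good :: Integer
good = let x = fromNat' 1 in x + 0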

--brian




Re: unlines: the mystery of the trailing \n

2000-08-07 Thread Brian Boutel


Here is a concern:

At present, a final \n in lines' input is optional, because a line is
ended by either a \n or the end of the string. Consequently lines "a"
and lines "a\n" have the same value ( ["a"] ). This seems a desirable
feature that is worth preserving.

Consider the composition lines.unlines, and what happens when the last
line is empty. unlines ["a", ""] is "a\n\n", and lines correctly
reconstructs the two lines from this. 
With this suggestion, unlines ["a", ""] becomes "a\n", which, unless you
change its behaviour, lines interprets as representing a single line
["a"], and lines.unlines is no longer the identity function.

--brian


Sigbjorn Finne wrote:
> 
> Here's a Prelude inconsistency that's been irking me once
> in a while for a loong time - today it came up again, so here goes:
> 
>   unlines   ["a","b"]   ==> "a\nb\n"
>   unwords ["a","b"]   ==> "a b"
> 
> I like that
> 
>   unwords (ls1 ++ [unwords ls2]) == unwords (ls1 ++ ls2)
> 
> but not that 'unlines' doesnt' obey the same rule, i.e.,
> 
>unlines [line1, unlines [line2,line3]]  /= unlines [line1,line2,line3]
> 
> Is this by design? I notice that 'unlines' mirrors Miranda's 'lay', but
> I'd find it a little more useful without the trailing \n (esp. considering
> now that putStrLn is std.)
> 
> The current defn of 'unlines' doesn't keep me up at night, but still.
> 
> --sigbjorn




RE: Haskell 98: partition; and take,drop,splitAt

2000-03-15 Thread Brian Boutel

> 3.  Manuel points out
>
> I must say that I'm strongly tempted to disallow empty qualifiers
> and make n>=1.  I'm not sure how this change crept in in the first
> place.  Does anyone care?  Urgle.

The report is in a bit of a mess here.
The top of section 3 (the summary of exp syntax) also has n>=1. App B4 has
n>=0.
The translation box in 3.11 clearly defines [e | ] as e, but does not define
[e | ,Q] (which should presumably be [e|Q]), or  [e|q] where q is a single
qualifier.

This originates with H98. H1.4 (which was actually defining monad
comprehensions) had n>=1 and no empty qualifiers, as did earlier versions. I
would be happy to revert.

--brian




RE: unexpected elements

2000-03-04 Thread Brian Boutel

On Sunday, March 05, 2000 10:06 AM, Reuben Thomas 
[SMTP:[EMAIL PROTECTED]] wrote:
>
> ...except there were rounding problems. Floating-point numbers are simply
> difficult. Representing them is bad, displaying them is worse, and there
> are questions I don't know the answer to about how the list is
> calculated: if by repeated addition, rounding errors will build up (it
> looks like that, since 8.0 can certainly be represented in the machine
> accurately, but wasn't in your list), and if by multiplication, they
> should be one-off errors.
>
> My own experiment (running Hugs September 1999 on a Psion Revo) suggest
> that in fact the list is formed by repeated addition, as I get:
>
> [1.0, 1.1,...,6.3,6.4,6.4999,6.5,...,9.28,...,
> 9.998]
>
> (adjust no. of 9's to taste)
>
> which looks like a rounding error is gradually building up. This is a
> silly way to calculate such lists in FP, but perhaps there's a good
> reason? (I suppose speed of calculation might be one!)
>

The Prelude definition of enumFromThenTo for floats is by iterated
addition. There is also the following note:

-- The Enum instances for Floats and Doubles are slightly unusual.
-- The `toEnum' function truncates numbers to Int. The definitions
-- of enumFrom and enumFromThen allow floats to be used in arithmetic
-- series: [0,0.1 .. 1.0]. However, roundoff errors make these somewhat
-- dubious. This example may have either 10 or 11 elements, depending on
-- how 0.1 is represented.
These things are arithmetic progressions, so one would expect addition by 
fixed steps equal to whatever floating point number most closely 
corresponds to the difference between the floating point numbers denoted by 
"1.0" and "1.1".  This will inevitably produce approximate results. In 
early versions of Haskell, Enum was a subclass of Ix, which prevented Float 
being an instance of Enum (you can't sensibly do range (1.0,2.0)), and this 
kind of problem was avoided.
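
The drift is easy to observe (the exact digits depend on the floating-point
implementation):

main :: IO ()
main = print [1.0, 1.1 .. 2.0 :: Double]
-- e.g. [1.0,1.1,1.2000000000000002,1.3000000000000003, ...]
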
--brian







> http://sc3d.org/rrt/ | certain, a.  insufficiently analysed





RE: overlapping instances: And a question about newtypes

2000-02-20 Thread Brian Boutel

On Sunday, February 20, 2000 4:13 PM, Fergus Henderson 
[SMTP:[EMAIL PROTECTED]] wrote:
>
> Well, you can always defined a type using Tree which _is_ an instance of 
Ord:
>
>   newtype OrdTree = MkOrdTree Tree
>   instance Ord OrdTree where ...
>
> So I don't see this as a disaster.
>

The example of when you might want to hide an instance (of Read, to prevent 
forgery) is persuasive.

However, how would you prevent the client following the advice given above 
of using a newtype declaration to redefine your type, making that an 
instance of Read, and thereby forging values of your type?

I imagine that the answer is that this cannot be done if the type is 
exported without its constructors**, which would be necessary anyway to 
prevent back-door accesses to your type. But then your suggested workaround 
for my problem is not available.

Actually the situation is worse than I thought. If your module defines a 
type T to be an instance of Ord, but fails to export that instance, then a) 
If I can't see the body code of your module, I probably (depending on the 
interface information) won't even know until link time that I can't declare 
this instance. b)  I will be unable to declare an instance, not just of 
Ord, but also of all the many subclasses of Ord.

** The Haskell report appears to be silent on this, although it seems 
obvious that an instance cannot be declared or derived if the type's 
constructors are not visible. Is it also true that this would preclude
declaring a newtype? Simon?

--brian




RE: overlapping instances

2000-02-19 Thread Brian Boutel

On Friday, February 18, 2000 7:17 PM, Fergus Henderson 
[SMTP:[EMAIL PROTECTED]] wrote:
> It's just a question of information hiding.
> It lets you declare a type to be an instance of a public type class
> without exporting that fact (and hence exporting those methods).
>
> > In Haskell, you can have at most 1 C-T instance, and that is visible in 
> > every module in the program which can be reached by a chain of
> > import/export declarations from the declaring module.
> > In Mercury you can have at most 1 C-T instance, but there may be more
> > places where it is not visible. There does not seem to be anything you
> > can do in those places that you could not do if the C-T instance were
> > visible there.
>
> You could say the same about e.g. abstract data types.
> The advantage is that the author of module A can ensure that
> module B will not access the private parts of module A.
> Certainly there's nothing more you could do in module B
> than if those parts were public, indeed what you can do is strictly
> less.  But the whole point of making things private is to *restrict*
> what other modules can do with them.
>

Obviously, in general, information hiding is useful. Here, the specific 
question is about instance declarations. Is there any value in being able to 
hide them?

I think the answer is no, for the following reasons:

1) There is complete control of the visibility of Classes and Types. If C 
or T is not visible at a point in the program, the visibility of the C-T 
instance is immaterial. If C and T are both visible, and the C-T instance 
is not, what have you gained? Can you give an example of when you would 
want to do this?

2) In terms of ADTs, Haskell allows (approximations to) ADTs by allowing a 
type to be exported without its constructors. This by itself gives you a 
sort of ADT with only one possible implementation. If you want ADTs where 
more than one implementation is possible, the mechanism is to use a Class C 
and its operators as the ADT, with various Types T as implementation types, 
and the C-T instance functions as the *public* methods. It is essential 
that they are not hidden. Of course, you can still have private methods, 
which are not exported, but these are not part of the Class.

3) If you do have private instances, then a program which needs to import a 
module with such a private C-T instance is prevented from declaring another 
instance for the same C and T (under the rule that allows only one C-T 
instance per program). Suppose, for example, that there is a type of Tree, 
and you have written a module, with some good stuff that I want to use, but 
in which you declare Tree to be an instance of Ord, but don't export it. 
Then I am prevented from using Tree as an ordered type. This would be a 
disaster.
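
Sketched with hypothetical module names (not legal Haskell today, since
instances cannot be hidden):

module GoodStuff (useful) where
  -- instance Ord Tree where ...   -- imagine: declared here but kept private

module Mine where
  import GoodStuff (useful)
  -- instance Ord Tree where ...   -- rejected: one Ord/Tree instance per program
  -- so Tree values could never be sorted or compared in Mine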

--brian




RE: overlapping instances

2000-02-17 Thread Brian Boutel

On Friday, February 18, 2000 1:46 AM, Fergus Henderson 
[SMTP:[EMAIL PROTECTED]] wrote:
>
> Mercury allows private instances, but it does not allow shadowing.
> For any given class and type, there can only be one instance;
> that instance can be public, or private, but not both.
>
> If you allow shadowing, then you have multiple instance declarations
> for the same class and type, and in that case you do have problems.
> But again, I blame those on allowing shadowing, not on having
> control over when instance declarations are exported.
>
> > Suppose the normal Prelude instance decl for type Integer as an instance
> > of Num is shadowed in Module A by a private Num-Integer instance decl.
>
> That kind of thing is not allowed in Mercury; if there is an instance
> for `Num Integer' (in Mercury syntax it would be `num(integer)')
> in one module, then you're not allowed to have an instance for that
> same combination in any other module in the same program.
>

OK,  I understand now. But what advantage does explicit control of instance 
import/export give you over the current Haskell rule?
In Haskell, you can have at most 1 C-T instance, and that is visible in 
every module in the program which can be reached by a chain of 
import/export declarations from the declaring module.
In Mercury you can have at most 1 C-T instance, but there may be more 
places where it is not visible. There does not seem to be anything you can 
do in those places that you could not do if the C-T instance were visible 
there.

But this is not quite the problem I was addressing, which was having 
multiple C-T instances in the program, but using explicit import/export 
control to ensure that only one was in scope at any place. My point was 
that exported function definitions which depended on a C-T instance could 
not be imported into a module in which a different C-T instance was in 
scope without causing serious problems.

--brian




RE: overlapping instances

2000-02-17 Thread Brian Boutel

On Thursday, February 17, 2000 7:02 PM, Fergus Henderson 
[SMTP:[EMAIL PROTECTED]] wrote:
>>
> Well, I remain unconvinced.  In Mercury, we give the user control
> over whether instance declarations are exported or not, and it
> works quite nicely, IMHO.  I think the problems that you are referring
> to below are only problems if in addition to allowing private instance
> declarations you also allow multiple or overlapping instance declarations.
> But I would assign the blame for these problems to multiple/overlapping
> instance declarations, not to having control of when instance
> declarations are exported.

I don't think this is the case. Suppose there is the present Haskell rule - 
only one C-T instance per program - with the addition that local "private" 
instances may be declared, which shadow the global instance in that module. 
This is a simple case, but allowing different  C-T instances in different 
parts of the program provided only one is visible in any scope would  not 
change the argument. Here is an example - I'll use a standard Class and 
Type so that I can use common symbols:

Suppose the normal Prelude instance decl for  type Integer as an instance 
of Num is shadowed in Module A by a private Num-Integer instance decl. 
Module A defines and exports a function

f::Integer->Integer->Integer
f x y = x*x+y*y

>
> > Consider a function, which is exported, and which uses a local "private"
> > instance decl. What happens in the importing module?
> > There are several possibilities:
> > 1. The "private" instance decl from the other module is used.
>
> That is what happens in Mercury.

OK, let's go with that option.

>
> > Then referential transparency is lost,
>
> No necessarily...
>
> > because if the name of the imported
> > function is replaced by its definition, any need for the instance will
> > not see the "private" instance declaration,
>
> Yes, but the result in Mercury will be a compile (or link) error.

Continuing the example:
Now Module B imports f from A. B has the normal Prelude  Num-Integer 
instance. If f brings with it the private instance - not as an import, but 
purely for its own use - any occurrence in B of f a b (a,b :: Integer)
will use it (which you say happens in Mercury), but unfolding f a b to
a*a+b*b, which referential transparency says should not change the result, 
will use the normal Prelude instance decl (and probably evaluate 
differently), because f is no longer referenced.

How can this produce a compile/link error? Are you saying that it is 
illegal in Mercury to import into a module using one C-T instance, a 
function which was declared in a module using a different C-T instance, 
even  if you don't import the instance, and the only place where it is used 
is in the imported function?

Note that if B doesn't import the Prelude instance, then unfolding f a b to 
a*a+b*b turns a valid program into an invalid one because * and + are 
undefined.

> > and if it finds another instance
> > decl in scope, will use that, and the semantics of the function will be 
> > changed.
>
> In Mercury there can't be another matching instance declaration
> in scope, since Mercury does not allow multiple or overlapping
> instance declarations.  Thus we preserve referential transparency.
>

In my example there is no place with multiple instances in scope. It's 
quite clear everywhere which instance is to be used, the problem is that 
normal semantics-preserving transformations break.

--brian




RE: overlapping instances

2000-02-16 Thread Brian Boutel

On Thursday, February 17, 2000 3:03 PM, Fergus Henderson 
[SMTP:[EMAIL PROTECTED]] wrote:
>>
> If Haskell had explicit imports and exports of instance declarations,
> then I could perhaps buy this argument.  But it doesn't.  In Haskell,
> all instance declarations defined in a module are always exported;
> there's no way to hide instance declarations that are intended to
> be private to the module:
>

This comes up occasionally. There really should be a FAQ for this sort of 
thing.
It is worth reminding people that there is a good reason for this rule - 
not the original 1988 reason, but good enough.

Originally, this was part of the mechanism that ensured that it was 
impossible to get two different instance declarations for the same 
Class/Type in scope at the same place. The other part of that mechanism, 
insisting that instance declarations could only be declared in modules 
where either the Class or Type were declared, and attaching instances to 
the Class and the Type so that they were exported with them, has 
subsequently been repealed, and it is now simply illegal to have multiple 
instance declarations for the same Class/Type.

If the remaining restriction were also abandoned, and explicit control of 
import/export used to prevent having more than one C-T instance in scope, 
or even if it were just possible to declare a "private" C-T instance which 
shadowed any imported instance and was not exported, there would still be a 
problem.

Consider a function, which is exported, and which uses a local "private" 
instance decl. What happens in the importing module?
There are several possibilities:
1. The "private" instance decl from the other module is used. Then 
referential transparency is lost, because if the name of the imported 
function is replaced by its definition, any need for the instance will not 
see the "private" instance declaration,  and if it finds another instance 
decl in scope, will use that, and the semantics of the function will be 
changed.
2. The local "private" decl is not used, and another decl is in scope. Then 
Haskell no longer has static scope, as the semantics of a function now 
depend on the point of use, not on the point of definition. Rather like 
Original Lisp. Nobody defends that anymore.
3. Even without "private" instances, but with explicit import/export 
control, you can get similar cases where a function defined in one scope is 
exported to another scope with a different C-T instance.  And you can't 
even rely on Library functions behaving as expected either, if you are not 
using the instance decl that was in scope in the Library.

--brian




RE: drop & take [was: fixing typos in Haskell-98]

2000-01-26 Thread Brian Boutel

On Thursday, January 27, 2000 2:08 PM, Frank A. Christoph 
[SMTP:[EMAIL PROTECTED]] wrote:

>> My preference is still (B). (A) is not *very* bad, but should really
>> replicate (-7) "foo" be []?
>
>I could say: Sure, why not? replicate suffers from the same domain problem
>as take/drop.

This was not the point of introducing replicate to the discussion. 
The Prelude says, in a comment,

-- replicate n x is a list of length n with x the value of every element

and then defines replicate in terms of take. 

replicate :: Int -> a -> [a]
replicate n x = take n (repeat x)

There is a clear assumption on the part of the Prelude authors that take
behaves in a particular way, corresponding to the law

length (take n xs) === n

but it doesn't. So even the authors of the Prelude got caught by take's
inconsistency.

While I dislike functions with a simple, obvious intended semantics being
extended in non-obvious, non-simple ways, because it creates just this kind
of error, I understand that many people are less concerned about it. I can
live with any of the proposed definitions, but do suggest that incorrect
statements are corrected.

So, if negative values are to be allowed in take, the comment re replicate should say, 

-- if n >=0 replicate n x is a list of length n with x the value of every element

Or, if negative values are *not* to be allowed in take, then fix the present
code so that they are not allowed for any list, including [].

--brian





RE: drop & take [was: fixing typos in Haskell-98]

2000-01-25 Thread Brian Boutel

On Wednesday, January 26, 2000 1:52 PM, Fergus Henderson 
[SMTP:[EMAIL PROTECTED]] wrote:
>
> I agree that it is too big a change for Haskell 98.
> But I think it would be too big a change for Haskell 2000 too.
> Making a change like that could cause previously working programs
> to fail, with no warning at compile time, and with the bug showing
> up only on certain inputs.  The cost of that would far outweigh
> the benefit (which, in the light of the mixed opinions here, seems
> dubious anyway).  Making non-backwards-compatible changes to the
> semantics of basic prelude functions like take and drop would be
> a really bad idea, IMHO.
>

Sometimes things are just wrong, and they have to be fixed, however painful
that is. I would rather have a definition that makes sense, than one that
has always been wrong.

This case is not grossly wrong, but it is annoying. Taking and dropping
negative numbers of values does not make sense.

What I would like is to have a decision about the essential properties of
the functions being defined (i.e. what laws are satisfied), and then a
definition that implements those properties.

There has always been a tendency to make minor changes to Haskell to 
satisfy immediate concerns. Not that Haskell is unique in that respect, but 
it should be a polished gem of a language, not wart-encrusted.
What worries me is that these little extensions have other consequences. 
Look at replicate, for example.

-- replicate n x is a list of length n with x the value of every element

replicate :: Int -> a -> [a]
replicate n x = take n (repeat x)

So, what is the value of replicate (-10) 1 ?

By way of interest, let's look at the history of take in Haskell

Versions 1.1. and 1.2 have

take   :: (Integral a) => a -> [b] -> [b]
take  0  _   = []
take _   []   = []
take (n+1) (x:xs) = x : take n xs

These definitions give us
 take -1 [] === [],  take -1 (x:xs) === bottom

Not very consistent, is it?

The next version I can find a copy of is 1.4. This introduces the change
that persists into H98:
take   :: Int -> [a] -> [a]
take 0 _   =  []
take _ []  =  []
take n (x:xs) | n > 0  =  x : take (n-1) xs
take _ _   =  error "Prelude.take: negative argument"

I suspect the intentions were a) to eliminate n+k patterns, and b) to force
the use of Int for efficiency reasons. Restricting the type to Int would
undoubtedly have broken some programs, but that did not justify not doing it.
This version has the same behaviour wrt negative argument values.

At the very least we have an inconsistency in the treatment of negative 
arguments, which should be fixed.
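
The 1.1/1.2 behaviour can be checked today by transcribing its definition
locally, with the n+k pattern rewritten as a guard:

take12 :: Int -> [a] -> [a]
take12 0 _              = []
take12 _ []             = []
take12 n (x:xs) | n > 0 = x : take12 (n-1) xs
-- no clause matches take12 (-1) (x:xs): a pattern-match failure, i.e. bottom

main :: IO ()
main = do
  print (take12 (-1) ([] :: [Int]))   -- []
  print (take12 (-1) [1,2,3])         -- runtime pattern-match failure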


--brian




RE: drop & take [was: fixing typos in Haskell-98]

2000-01-25 Thread Brian Boutel

On Wednesday, January 26, 2000 9:12 AM, Joe Fasel [SMTP:[EMAIL PROTECTED]] wrote:
> 
> 
> The call some have made for the tightest possible error
> checking also has merit, however.  That would suggest
> these definitions:
> 
> > takeExactly 0 _ = []
> > takeExactly (n+1) (x:xs) = x : takeExactly n xs
> > takeExactly _ _ = undefined
> 
> > dropExactly 0 xs = xs
> > dropExactly (n+1) (_:xs) = dropExactly n xs
> > dropExactly _ _ = undefined
> 
> > splitAtExactly n xs = (takeExactly n xs, dropExactly n xs)
> 
> I would say that the more loosely-defined functions definitely
> belong in the library and that it is a matter of taste whether
> or not to include the tighter ones.

We have seen various proposals about what laws should hold wrt
take and drop. I think there is a reasonable presumption that the 
following  very simple laws should hold first:

length (take n xs) === n
length (drop n xs) === length xs - n

This supports Joe's takeExactly/dropExactly definitions.
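
As an executable check of those laws against Joe's definitions (transcribed
here without n+k patterns):

takeExactly :: Int -> [a] -> [a]
takeExactly 0 _              = []
takeExactly n (x:xs) | n > 0 = x : takeExactly (n-1) xs
takeExactly _ _              = undefined

dropExactly :: Int -> [a] -> [a]
dropExactly 0 xs             = xs
dropExactly n (_:xs) | n > 0 = dropExactly (n-1) xs
dropExactly _ _              = undefined

lawsHold :: Int -> [a] -> Bool   -- for 0 <= n <= length xs
lawsHold n xs = length (takeExactly n xs) == n
             && length (dropExactly n xs) == length xs - n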

--brian




RE: Haskell 98: partition; and take,drop,splitAt

2000-01-24 Thread Brian Boutel

On Tuesday, January 25, 2000 10:00 AM, Joe English [SMTP:[EMAIL PROTECTED]] wrote:
> 
> Is the filter/filter definition semantically equivalent to:
> 
> partition p xs = foldr select ([],[]) xs
> where
> select x ~(ts,fs)   | p x   = (x:ts,fs)
>   | otherwise = (ts, x:fs)
> 
> (that is, the current definition in the Library report
> with an extra twiddle added)?
> 
> Operationally, the 'foldr' version makes half as many
> calls to 'p' as the 'filter/filter' version, so the former
> may be preferable if the two are in fact semantically
> equivalent.
> 

I think it is better for the report to use a simple definition like
filter/filter to illustrate the semantics. Real implementations may then use
any more efficient definition that preserves them.

--brian




RE: fixing typos in Haskell-98

2000-01-24 Thread Brian Boutel

On Tuesday, January 25, 2000 8:38 AM, S. Alexander Jacobson [SMTP:[EMAIL PROTECTED]] wrote:
  Why not do what python does?

drop -2 -- drops the last 2 elements from the list
take -2 -- grabs the last 2 elements from the list

take n list | n<0 = drop (length list + n) list
drop n list | n<0 = take (length list + n) list

[...]
I think this solution also satisfies Chris Okasaki's:  

take n xs ++ drop n xs == xs

(except where the list is infinite)

try it:
take (-2) [1,2,3,4] -> [3,4]
drop (-2) [1,2,3,4] -> [1,2]
take (-2) [1,2,3,4] ++ drop (-2) [1,2,3,4] -> [3,4,1,2]
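
A self-contained sketch of that behaviour (the names are hypothetical, so it
can sit beside the Prelude versions):

takeP, dropP :: Int -> [a] -> [a]
takeP n xs | n < 0     = drop (length xs + n) xs
           | otherwise = take n xs
dropP n xs | n < 0     = take (length xs + n) xs
           | otherwise = drop n xs

Note that for negative n these must walk the whole list to find its length,
so they diverge on infinite lists (the caveat noted above).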

--brian




Reverse Composition and Preprocessor Discussions

1999-10-10 Thread Brian Boutel

On Sunday, 10 October 1999 00:09, Lennart Augustsson [SMTP:[EMAIL PROTECTED]] 
wrote:
> Marcin 'Qrczak' Kowalczyk wrote:
> 
> > Sat, 9 Oct 1999 12:42:20 +1300, Brian Boutel <[EMAIL PROTECTED]> pisze:
> >
> > > Be careful. '<-' is two symbols. Replacing it by one symbol can change the
> > > semantics of a program by affecting layout.
> >
> > No, because only the indent before the first non-whitespace character
> > in a line matters. Haskell programs can be typeset even in proportional
> > font as long as indents have correct relationships between their
> > lengths.
> 
> You must be using a different Haskell than I am.  :-)
> Consider these two fragments:
> a = x + y where x = 1
>  y = 1
> vs.
> a = x ++ y where x = 1
>  y = 1
> 
> They have very different syntactical meaning.
> 

It occurs to me that this signals a problem with the macro preprocessor
proposals. Macro expansion can affect relative indentation, and therefore
create errors which would be hard to find because the expanded form is not
normally visible.

This strongly suggests either
a) Restrict preprocessing to whole-line inclusion/exclusion (conditional
compilation), including #define, #ifdef, #elseif, #else, etc., or
b) Allow general macro expansion, but do the "pre-"processing after layout
processing.

--brian









RE: Reverse composition

1999-10-09 Thread Brian Boutel



On Saturday, 9 October 1999 12:00, Clifford Beshers [SMTP:[EMAIL PROTECTED]] 
wrote:
> 
> But we do have bitmapped displays, lots of fonts, graphical
> applications, etc.  Perhaps augmenting JH/SPJ's pretty printer to
> generate LaTeX or PostScript with real symbols would be a good first
> step.  Augmenting the emacs modes to use other symbols would be
> another.  Or just biting the big bullet and making a customized
> editor.
> 
> For each of these users could supply a list of translations, e.g.,
> 
>[ x^2 | x <- [1..10] ]
> 
> would become real LaTeX with a superscripted 2 and <- would be a real
> set element symbol.
> 

Be careful. '<-' is two symbols. Replacing it by one symbol can change the 
semantics of a program by affecting layout. You can't guarantee that
a prettyprinted program will still be the same program.

If the supply of suitable Ascii symbols seems inadequate, remember
that Haskell uses Unicode. There is no reason to limit symbols to those in 
the Ascii set. 

--brian







RE: Announce: Functional Graph Library

1999-08-25 Thread Brian Boutel

The link on this page to the paper "Functional Programming with Graphs"
is incorrect. The Erwig  directory is omitted.

-brian

On Wednesday, 25 August 1999 18:25, Martin Erwig [SMTP:[EMAIL PROTECTED]] 
wrote:
> 
> In response to several requests, I have put a
> preliminary Haskell version of the Functional
> Graph Library (FGL) on the Web:
> 
>   http://www.informatik.fernuni-hagen.de/pi4/erwig/fgl/
> 
> --
> Martin





RE: View on true ad-hoc overloading.

1999-05-20 Thread Brian Boutel


On Thursday, May 20, 1999 3:58 PM, Kevin Atkinson [SMTP:[EMAIL PROTECTED]] 
wrote:
> True ad-hoc overloading can lead to unreadable programs if it is
> misused.  However it can make code more readable and concise if used
> properly.
>

I can't disagree with this. But I can object to it. The words "used
properly" ask more than it is reasonable to expect.

What follows is not directed at Kevin, but is a general plea for the 
defence of Haskell.

Programs for real world applications and programs which will be read by
other people (or even by the author some time after writing them) must be
correct, clear and unambiguous. Unambiguity implies not just that the
compiler will not be confused, but that Joe Programmer will not be confused
when reading a program to fix a bug or to modify it to reflect a change in
requirements. To allow ad hoc overloading is to invite errors resulting
from that kind of confusion. This is why Haskell, from the beginning, has
had type classes but not arbitrary overloading.

Language design includes human factors issues. If you want a language so
full of neat features that it will accept anything the compiler can make
sense of, try PL/1. Or (if you are old enough to remember) write your name
in Teco, and try to work out what it means. We tried to learn from the early
experience of language design. Haskell is an advance on previous languages.

Haskell is a general purpose language. It is not a specialised language for
doing mathematics. It should not be a language that can only be written and
understood by specialist experts. If we believe that functional languages
are superior and want them adopted widely, we must ensure that they are
attractive to and usable by people in the computer industry. Perhaps even
Microsoft people :-).

If you want a different language, tailored to your special needs, then
define it and implement it - write a pre-processor to convert it to Haskell
if you like, but please do not try to change the essential character of
Haskell.

--brian








RE: View on true ad-hoc overloading.

1999-05-20 Thread Brian Boutel


In response to a question about ad hoc overloading:

On Thursday, May 20, 1999 9:10 AM, Nigel Perry [SMTP:[EMAIL PROTECTED]] wrote:
>
> So to answer the question: it can be done, by a simple existance proof :-)
> 

Of course it *can* be done, but *should* it be done?

Uncontrolled overloading means that when you see a function application you
can't immediately see what function is being applied - you see its name but
not its semantics, because there may be many different functions with the
same name.

Obfuscating the program source in this way presents a risk of error, and is bad 
language engineering.

What would be gained by allowing ad hoc overloading? If operations on
different types have similar meaning there is a case for defining a new
class. If you have existing different functions with similar names you can
qualify them to avoid the ambiguity. When else would you want this feature?

--brian


Brian Boutel
Phone +64  4 9386709 Fax +64 4 9386710  Mobile 021 410142
[EMAIL PROTECTED]






RE: non-linear patterns

1999-05-05 Thread Brian Boutel

As far as I remember, this was considered by the original Haskell committee
in 1988. The argument then against non-linear patterns was that, in the
interests of equational reasoning, it was desirable to define a function
using disjoint cases, and there was no way of defining, using a pattern,
all the values that do not match the non-linear pattern.

The suggested translation of non-linear patterns using guards is a very 
simple case and does not  obviously generalise. For example, given the 
left-to-right semantics of pattern matching, with
g x x 1 = e1 ; g  x y z = e2
h x x' 1 | x==x' = e1 ; h x y z = e2
should
g 1 2 bottom
and
h 1 2 bottom
have the same value?

I think that g 1 2 bottom should be e2, and h 1 2 bottom should be bottom.
A possible translation of g would then be

g x y z = let g' x y | x == y    = v
                     | otherwise = \ z -> e2
              v 1 = e1
          in g' x y z

Suppose we had g x x x = e1. Given that == is not required to be 
transitive for every user-defined overloading, it would seem that 3 
equality tests would be necessary!
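
Spelled out as a sketch (with e1 and e2 replaced by placeholder results):

g3 :: Eq a => a -> a -> a -> Int
g3 x y z | x == y && y == z && x == z = 1   -- e1
g3 _ _ _                              = 2   -- e2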

My view is that non-linear patterns are a succinct way of defining a very
limited class of constraints, but probably do not buy enough to justify
inclusion in Haskell.

--brian


On Wednesday, May 05, 1999 9:16 AM, Peter Thiemann 
[SMTP:[EMAIL PROTECTED]] wrote:
> A friend and I recently discussed why patterns in Haskell are
> restricted to be linear. He found it unintuitive due to his background
> in term rewriting and logic. And he noted that it is easy to compile
> away as in:
>
> f (x, x) = e
> ==>>
> f (x, x') | x == x' = e
>
> It is also easy to transform away in list comprehensions:
>
> (x, x) <- e
> ==>>
> (x, x') <- e, x == x'
>
> My main argument against it was a language design issue, namely that
> suddenly x is required to have an Eq type which cannot be explained by
> looking at its uses in e.
>
> Another problem is that comparing x with x' makes this kind of pattern
> matching super-strict (since x may be reduced to normal form).
>
> Can someone enlighten me on other arguments for or against non-linear
> patterns?
>
> NB:
> If I remember the Haskell98 discussion correctly, there was a
> discussion on Monad and (the now dead) MonadZero, where the MonadZero
> appeared "magically" in the context, whenever someone used (refutable)
> patterns in the do-notation. This discussion (which was resolved by
> hacking class Monad and dropping class MonadZero) is imho related to
> the question raised above; in both cases, the use of some language
> feature changes/restricts the type.
>
> -Peter





Re: why I hate n+k

1998-12-01 Thread Brian Boutel


Craig Dickson wrote:
> Why do you find this makes a significant difference? Personally, I find
> 
> f x = ... f (x - 1)
> 
> much more intuitive than
> 
>f (x + 1) = ... f x
> 
> I see no advantage in the n+k version.
> 
> 

I agree. n+k patterns make sense for a type of Natural Numbers (including 0), 
but not for general Integral types.

They are also dangerous because they are defined in terms of < and -, which, 
in a user-defined type, need not obey the usual laws, e.g. you cannot assume 
that 0 < 1 holds.
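
The Report's translation makes that dependence explicit; an equation such as
f (n+1) = n desugars roughly to:

f :: Integer -> Integer
f x | x >= 1 = let n = x - 1 in n
-- no other clause: arguments below 1 fail to match

so a user-defined Integral type with an unusual (>=) or (-) changes which
arguments match.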

The problem is that dropping them would break lots of stuff - but probably 
more textbooks than programs.

--brian






Re: pattern guards and guarded patterns

1997-05-01 Thread Brian Boutel


> 3.  I think it's quite important to use "<-" rather than "=".
>   a) it means that the syntax is precisely that of list comprehensions
>   b) there's an "=" in the definition already, as Andy points out:
>   simplify (Plus e e') | let (Val 0) = s  = s'
>| let (Val 0) = s' = s
>| otherwise= Plus s s'
>where
>s  = simplify e
>s' = simplify e'
> 
>   now there are too many "=" signs to parse easily.  
>
The proposal was to substitute qual for exp^0 in the syntax of guards, 
implying that the use of let decllist in a guard is legal. If it is legal, 
people will use it. I take it that "too many "=" signs to parse easily" refers 
to a human reader, not to a machine. There is no ambiguity in the syntax.
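
For comparison, the same function in the <- form of the proposal (a sketch,
assuming the Plus/Val constructors from the quoted example):

simplify (Plus e e') | Val 0 <- s  = s'
                     | Val 0 <- s' = s
                     | otherwise   = Plus s s'
  where
    s  = simplify e
    s' = simplify e'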


>   c) Furthermore, "let" can introduce multiple 
>  mutually-recursive bindings,
>  and that leads to all the "which order to test" problems 
>  that I outlined earlier.
> 
>Point (b) might even suggest disallowing the let form. Under my proposal
>I can write this:
> 
>   foo x | let y = x+1 = y+1
> 
>It's a bit silly, because I can also use let or where in this 
>situation, but it's not ambiguous so I don't see any particularly
>good reason to disallow it.
> 

One can write ugly code in any language. The concrete syntax here is not 
ideal, but the proposal is too good to lose simply because of that. The 
problem is the old one that there are just not enough graphics to go round.

>CONCLUSION: "let" should be allowed, but should introduce multiple,
>mutually-recursive bindings.  If any are pattern bindings then they
>are matched lazily, and failure to match is a program error.  Exactly
>as for ordinary let/where bindings, and let bindings in list
>comprehensions.
> 

Agreed.

--brian







Re: A new view of guards

1997-04-30 Thread Brian Boutel

I'm quite comfortable with the idea. Guards are part of the lhs of an 
equation, and that is where binding takes place.

The <- syntax worries me a bit, because in the comprehension use it has a 
different type, but the let syntax is available, and one can write 
"let p = e" for "p <- e". I think that, to reduce possible confusion, I 
would use, and teach, the let form. Given that, I don't object to the 
funny use of <-.
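
The type difference is easy to see side by side (a sketch with invented
names; the guard form is written in the <- syntax as proposed):

squares :: [Int] -> [Int]
squares xs = [ x * x | x <- xs ]          -- right of <- is a list

name :: Int -> String
name n | Just s <- lookup n table = s     -- right of <- is an ordinary value
       | otherwise                = "unknown"
  where table = [(1, "one"), (2, "two")]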

--brian







Defining Haskell 1.3 - Committee volunteers wanted

1993-09-20 Thread Brian Boutel



Joe Fasel, John Peterson and I met recently to discuss the next step in
the evolution of Haskell.

While there are some big issues up ahead (adding Gofer-like constructor
classes, for example), these should be considered for the next major
revision, Haskell 2.0.

For now, we want to be less ambitious, and produce a definition of
Haskell 1.3.

Topics on the agenda include:

Monadic IO
Strict data constructors
Records (naming field components)
Prelude hacking
Standardizing annotation syntax

We think the best way to proceed is to call for volunteers to form 
a new committee to do the work on this.

So, who's interested?

--brian





Re: n+k patterns, etc.

1993-05-28 Thread Brian Boutel



(I sent a similar message a few days ago which got lost somewhere)

We have tried to express the semantics of some Haskell constructs by giving
a translation into "Kernel" Haskell (Report section 1.2).

This leads to difficulties because free variables in the translations can
be captured by the context in which the construct is used.

We have tried to use the importation rules applying to Prelude and
PreludeCore to ensure the desired behaviour, but this is insufficient and
unclear. Specifically

1) Everything exported by PreludeCore is implicitly imported into every
module, and cannot be renamed or redefined at the top level. This covers
the standard classes (including their member functions) and types, and in
particular the operators used in the translation of n+k patterns. Those
operators therefore always refer to the standard class methods, except
perhaps in inner scopes where names used in the translation have been
locally rebound. It is intended that the Prelude meanings should be used
in the translation even where the names are locally rebound, but there is
nothing to enforce this.

2) Things exported by Prelude but not by PreludeCore are implicitly
imported into every module unless Prelude is explicitly imported, in which
case they can be renamed or hidden. Despite this, we want names used in
translations to refer to the Prelude entities even though these might not
be visible at that point of the program, because they were not imported,
or were renamed and the names reused, or were locally redefined.

I think that we should try a different approach, forget about the importing
mechanism, and make a single statement defining the intended semantics.

Section 1.2 (The Haskell Kernel) is the place. I propose adding the
following. 

The translations given, and the identities given for the semantics of
case expressions, are not macros. A simple replacement of the
right-hand-side for the left-hand-side with substitution of parameters does
not give the intended semantics. The reason for this is that the
translations make use of certain names defined in the standard prelude (see
section 5.4), and macro substitution could result in the capture of these
names by locally defined entities, or the use of a name in a context in
which it is not defined at all because the part of the prelude in which it
is defined has not been imported. The general rule is: the use of a name
defined in the standard prelude in a translation intended to show the
semantics of a construct always implies the definition in the standard
prelude.


Then, people can locally rebind as much as they want, but the constructs
defined by translation will be unaffected.
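
A sketch of the kind of capture this rule forestalls (the function f is
invented; a modern GHC needs the NPlusKPatterns extension to accept the
n+k pattern at all):

f :: Int -> Int
f x = let (-) a b = a + b        -- locally rebinds (-)
      in case x of
           (n+1) -> n            -- the translation still uses Prelude (-)
           _     -> 0

-- under the rule above, f 5 is 4, not 6: the local (-) cannot capture
-- the subtraction implicit in the n+k pattern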

The syntax makes it clear that the  + and - used in patterns are not the
same as the varops denoted by these symbols, so are unaffected by
rebinding. I suppose a note could be added pointing this out if absolutely
necessary.


--brian




Re: Pattern Binding

1992-05-27 Thread Brian Boutel

Norman Graham says:

In section 4.4 of the Haskell Report (v1.2), a pattern binding of the form

p | g1 = e1
  | g2 = e2
   ...
  | gm = em
  where { decls }

is given the translation

p = let decls in
if g1 then e1 else
if g2 then e2 else
...
if gm then em else error "Unmatched pattern"

This strikes me as a bit odd: It says that only the guards determine
which e to bind to p. To my mind, the e's should have some say
in the matter also. If g1 is True and p = e1 fails to match, then
I would expect the pattern matching to continue with the match
'| g2 = e2'.

This seems to be based on the assumption that the guard expressions do not
involve the pattern variables.

OK, let's start with that.

The report says that (p21) "a guard is ... evaluated only after all of the
arguments have been successfully matched..."  and this appears to apply to
pattern bindings even though they do not have "arguments".

Based on this I would have expected the match of p to e1 to have been done
first, then, if it succeeded, the guard g1 to be checked. Now it is quite
reasonable, if either the match or the guard failed, to continue with the
next option.

This is not what is implied by the given translation. It's also a problem to
give a translation based on the alternative semantics. So, did we mean it to
be the way it is, and get the translation wrong?

I can't see anything in the report to limit guards to mention only free
variables, but if pattern variables are used in the guards, big problems
arise with the current translation.

If we put the pattern binding in context so that there is an expression

let p | g1 = e1 ... where {decls} in e

then we have the translation

let p = fix ( \^p -> let {decls} in if g1 then e1 else ...) in e.

Taking an example,

let (x,y) | (x==y) = (1,1)  in x

translates eventually to

let z = fix (\ ^(x,y) -> if x==y then (1,1) else error "...") in
let x' = case z of (x,y) -> x in
let y' = case z of (x,y) -> y in x'

and I believe that the required fixed point is undefined.
(I tried various examples in gofer and hbc)
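
A runnable sketch of the divergence (writing ~ for the lazy-pattern marker
rendered ^ above, and supplying fix explicitly):

fix :: (a -> a) -> a
fix f = let x = f x in x

z :: (Int, Int)
z = fix (\ ~(x, y) -> if x == y then (1, 1) else error "Unmatched pattern")

-- fst z loops: evaluating x == y forces the pair z itself,
-- so the required fixed point is indeed bottom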

It's quite plausible that this should have the meaning that could be
expressed as "let (x,x) = (1,1) in x" if we had non-linear patterns. The
statement  in the report about the environment of the guard being the
environment of the right-hand-side appears to allow this kind of construct.

On the other hand, a translation of the kind suggested above would work,
because the matching of (1,1) to (x,y) would be done before the x==y check.

So I think we need a different semantics for pattern bindings to the
translation given in the report. I don't know how to express it, though.

--brian