RE: [Haskell] deriving with newtypes

2004-04-06 Thread Brandon Michael Moore


On Fri, 2 Apr 2004, Simon Peyton-Jones wrote:

> Your word is my command. 'Tis done.
>
> Simon
>
> | -Original Message-
> | From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> On Behalf Of Wolfgang
> | Jeltsch
> | Sent: 21 March 2004 17:55
> | To: The Haskell Mailing List
> | Subject: [Haskell] deriving with newtypes
> |
> | Hello,
> |
> | I'm trying to use GHC's deriving mechanism for newtypes in the
> following way:
> | class C a b
> | instance C [a] Char
> | newtype T = T Char deriving C [a]
> | Unfortunately, this isn't possible. Is there a reason for this? Can I
> | circumvent this restriction?
> |
> | Wolfgang

It looks like this lets you use partially applied type classes in a
deriving clause, always applying that class to the new type last. This
looks nice, but it wouldn't work so well if your newtype was supposed to go
first:

class C a b
instance C Char [a]
newtype T = T Char deriving C ??

Maybe a deriving clause should allow full instance heads as well as class
names, so you could write "deriving C T [a]" (maybe with some
restrictions, like ensuring the new type appears, or is one of the
class arguments).

It seems more regular to allow you to derive an instance of a
multi-parameter type class with your type in any position, rather than just
the last.
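
For comparison, the case that newtype deriving already handles smoothly is
the one where the newtype fills the class's last argument. A small sketch in
modern-GHC terms (MonadState comes from mtl; Counter and tick are just
illustrations):

{-# LANGUAGE GeneralizedNewtypeDeriving #-}
import Control.Monad.State

-- The newtype fills the *last* argument of the two-parameter class
-- MonadState Int, so the instance can be derived. With the argument
-- order reversed there would be nothing sensible to write in the
-- deriving clause, which is the point above.
newtype Counter a = Counter (State Int a)
  deriving (Functor, Applicative, Monad, MonadState Int)

tick :: Counter Int
tick = do
  n <- get
  put (n + 1)
  return n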

Brandon

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] What is the best way to write adapters?

2004-03-11 Thread Brandon Michael Moore


On Thu, 11 Mar 2004 [EMAIL PROTECTED] wrote:
>
> Thanks! Oleg.
>
> This works and it looks nice!
>
> And now, my code can be like:
>
> class FwdSig d where
>   fwd :: (forall a. Sig a => a -> w) -> d -> w
>
> All the types that supports such forwarding are instances of FwdSig.
>
> My Def type is:
>
> instance FwdSig Def where
>   fwd f (ClassDef c) = f c
>   fwd f (ProtDef p) = f p
>
> instance Sig Def where
>   getName = fwd getName
>   getMethods = fwd getMethods
>   ...
>
> My Native type is:
>
> instance FwdSig Native where
>   fwd f (NativeSignature s) = f s
>   fwd f (NativeProtocol p) = f p
>
> instance Sig Native where
>   getName = fwd getName
>   getMethods = fwd getMethods
>   ...
>
> Many annoying forwarding functions are gone.
>
> The only thing that I hope to be better is this "getXXX = fwd getXXX" piece
> of code. Is it possible to reuse the same piece of code for both Native and
> Def and any other possible types?

I'm not as handy with the type system as Oleg, but I can help out here.

The problem with your new instance is that when the compiler is trying to
see if Native is an instance of Sig, it can start with either the
declaration Sig Native or the declaration FwdSig a => Sig a, both of which
could potentially derive an instance of Sig Native.

However, passing the -fallow-overlapping-instances flag will permit
you to compile a program where instances overlap like this, and GHC will
select the most specific matching instance (looking only at the heads).
Your code is fine.
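
For concreteness, a minimal self-contained sketch of the pattern (the type
and class names here are stand-ins for your real code, and in a modern GHC
the per-instance OVERLAPPABLE pragma plays the role of the old
-fallow-overlapping-instances flag):

{-# LANGUAGE RankNTypes, FlexibleInstances, UndecidableInstances #-}

class Sig a where
  getName :: a -> String

newtype ClassSig = ClassSig String

instance Sig ClassSig where
  getName (ClassSig n) = n

class FwdSig d where
  fwd :: (forall a. Sig a => a -> w) -> d -> w

newtype Def = ClassDef ClassSig

instance FwdSig Def where
  fwd f (ClassDef c) = f c

-- The catch-all instance overlaps with any direct Sig instance, so the
-- more specific one must be allowed to win.
instance {-# OVERLAPPABLE #-} FwdSig d => Sig d where
  getName = fwd getName

example :: String
example = getName (ClassDef (ClassSig "Point"))   -- "Point"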

Brandon

> Inspired by your generic code, I wrote such thing:
>
> instance FwdSig d => Sig d where
>   getName = fwd getName
>   getMethods = fwd getMethods
>   ...
>
> However, my ghc complains about the use of "Sig d".
>
> I followed its recommendation and put -fallow-undecidable-instances flag
> with the surprise that the "FwdSig d=>Sig d" instance declaration conflicts
> with my other "instance Sig XXX" declarations.
>
> Surely this is not a serious problem, I can live with repeating the
> "getXXX=fwd getXXX" several times. Just curious about how further this can
> go.
>
> Ben.


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


[Haskell] Re: [Haskell-cafe] matching constructors

2004-03-08 Thread Brandon Michael Moore
I think the generics approach really is overkill here, but it's nice to
know about the generics library.

For option processing Tomasz Zielonka described a nice technique
in the post I referred to. You can find the post in the archives at
http://www.haskell.org//pipermail/haskell/2004-January/013412.html

The big example at the end of his post seems to have exactly the option
structure you want, with input, output, a verbose flag, and a (composable)
selection of filters to use.

The basic idea is to make a record containing the options in their most
useful form and make each option descriptor (I assume you are using
(System.Console.)GetOpt here) return a function that transforms an option
record to reflect that option. Then, to handle the list of values you get
back, you just apply each transformer in turn to the default options, as in
the sketch below.
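
In outline (the Options record and the flags here are just placeholders),
the technique looks something like this:

import System.Console.GetOpt
import System.Environment (getArgs)

data Options = Options { optVerbose :: Bool, optOutput :: FilePath }

defaultOptions :: Options
defaultOptions = Options { optVerbose = False, optOutput = "out.txt" }

-- Each descriptor yields a function that updates the option record.
options :: [OptDescr (Options -> Options)]
options =
  [ Option ['v'] ["verbose"]
      (NoArg (\o -> o { optVerbose = True })) "chatty output"
  , Option ['o'] ["output"]
      (ReqArg (\f o -> o { optOutput = f }) "FILE") "output file"
  ]

-- Fold the parsed transformers over the defaults.
parseArgs :: [String] -> Either [String] Options
parseArgs argv = case getOpt Permute options argv of
  (transformers, _, []) -> Right (foldr ($) defaultOptions transformers)
  (_, _, errs)          -> Left errs

main :: IO ()
main = do
  argv <- getArgs
  case parseArgs argv of
    Left errs  -> mapM_ putStrLn errs
    Right opts -> putStrLn ("verbose: " ++ show (optVerbose opts)
                            ++ ", output: " ++ optOutput opts)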

It somewhat resembles building up option values in a collection of mutable
variables, although of course values are rather more flexible in Haskell
than in most other languages, and the "state" is encapsulated and well
behaved.

Brandon

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


RE: [Haskell] Per-type function namespaces (was: Data.Set whishes)

2004-02-27 Thread Brandon Michael Moore


On Fri, 27 Feb 2004, Simon Peyton-Jones wrote:

> > The idea that I've been throwing around is to be able to define a
> > separate namespace for each type; a function can either belong in a
> > "global" (default) namespace, or belong in a particular type's
> > namespace.  So, in the above example, instead of writing "addToFM fm
> > ...", we could instead associate an 'add' function with the FiniteMap
> > type, so we could write "fm.add ..." instead.  Provided that fm's type
>
> > is monomorphic, it should be possible to call the 'correct' add
> > function; if we defined another 'add' function that's associated with
>
> Remember, too, that in OO languages the type of 'fm' is usually
> declared, in advance, by the programmer.  In Haskell it isn't.   That
> makes it much harder to figure out which 'add' function is going to be
> used.
>
> Which 'add' function is chosen depends on the type of 'fm'.  But the
> add function that is chosen in turn influences the type of the other
> arguments.  For example, in the call (fm.add foo), the type of 'foo' is
> influenced by the choice of 'add'.  But the type of 'foo' might (by the
> magic of type inference) affect the type of 'fm'.
>
> In Haskell today, you can at least tell what value is bound to each
> identifier in the program, *without* first doing type checking.  And the
> binding story, all by itself, is somewhat complicated.  The typing story
> is also (very) complicated.  Winding the two into a single indissoluble
> whole would make it far more complicated still.

I thought this wasn't the case if there are type classes involved. What
value is "+" bound to in 1 + 1? All I can think is to say that the
appropriate value of + is selected based on the types, or to say that the
value here is the class member (subsuming several instances). Either way
I don't see how a method for overloading individual function names would
have a greatly different story.

Actually, picking a version of a function (from the versions in scope)
based on which type actually works might be useful. It seems to extend the
handling of overlapping names in a useful direction again, resolving
ambiguity by assuming you meant to write a typeable program.

We would probably want some special syntax with the imports to
request/flag this behaviour, like "import A; import B; import C; resolve
foo". One heuristic would be typechecking with no information on the
name(s) and checking that there is a unique way to resolve the ambiguity
at each point.

> My nose tells me that this way lies madness.

I think the general principle of using types to capture and infer intent
is still sound. It would be nice to have ad-hoc overloading also in cases
where we don't see a common intent between several functions to capture
with a typeclass (intents that we can't capture are arguments for
improving the class system).

A lot of Haskell looks like madness already anyway :)
We just need to find things that look like good madness ;)

>
> But I've been wrong before.
>
> Simon

Brandon

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] Per-type function namespaces (was: Data.Set whishes)

2004-02-26 Thread Brandon Michael Moore
On Fri, 27 Feb 2004 [EMAIL PROTECTED] wrote:

> On 27/02/2004, at 1:13 PM, [EMAIL PROTECTED] wrote:
>
> 1) now I have to manually declare a class definition for every single
> function, and I have to declare it in advance before any module defines
> that function (most serious problem; see below),
>
> 2) I then have to declare instances of that type class for every
> function I define,
>
> 3) the type signature for phase reveals no clues about how to use that
> function.

Declaring a type class instance is really no problem. You just need to
write an "instance Class (Type)" instead of "function :: Type" on the line
before the function declaration. The type of "phase" itself wouldn't
provide much information, but the list of instances each module defines
would be informative. Something like :info wouldn't be much help without
modification.

> So unfortunately, this is hardly a scalable solution.  The entire
> reason I came up with the idea is because if we use type classes to
> implement this sort of overloading, we have to know every single
> possible function that any module author will ever create, and declare
> classes for those functions in advance.  This is fine if you're
> declaring truly polymorphic functions which are designed from the start
> to be totally general, but it is not designed for functions which may
> do vastly different things and may contain totally different type
> signatures, but share the same name because that would be a sensible
> thing to do.  (e.g. the phase function mentioned above.)

In the paper "Object-Oriented Style Overloading for Haskell", Mark Shields
and Simon Peyton-Jones propose, among other things, adding method
constraints to the type system, which (as far as I can tell) basically
amounts to generating a type class for each function name and letting
you write constraints like (foo :: Int -> Int) on your function.

They would set up the type classes like "class Has_foo a where foo :: a",
which can cause problems if your argument and return value are
polymorphic under a class constraint rather than concrete types. Making
the method classes implicitly closed would probably help here (closed
classes are another of their suggestions). While making that closed-world
assumption, it would probably be nice if it only selected between the
versions of the function that were actually in scope at the moment (so
these would act kind of like methods that overload if you import several
of them, rather than conflicting as they normally would).
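
A rough sketch of that encoding (Has_add and these instances are just
illustrative stand-ins, not the paper's actual machinery):

{-# LANGUAGE FlexibleInstances #-}

class Has_add a where
  add :: a

-- An association-list "add" at one type ...
instance Has_add ([(Int, String)] -> Int -> String -> [(Int, String)]) where
  add m k v = (k, v) : m

-- ... and a list "add" sharing the same name at another.
instance Has_add (String -> Char -> String) where
  add s c = c : s

-- Resolution only works when the use site pins down the whole type, which
-- is why a closed-world reading of these classes would help.
example :: [(Int, String)]
example = add [(1 :: Int, "a")] (2 :: Int) "b"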

As long as we are integrating these special type classes into the language
we can make sure things like error messages and ghci give decent
information, maybe listing all the different types the function is
imported at, and where each version is defined.

> With the per-type namespace separation I'm advocating, you do not need
> to know and declare in advance that each function "will be" overloaded,
> you simply write a FiniteMap.add function and a Set.add function, and
> you get a simpler form of namespace separation (or overloading) based
> on the first parameter that's passed to it.  It is a solution which is
> more _flexible_ than requiring type class definitions, and it is better
> than having hungarian notation for functions.  In fact, I think that,
> right now, if we replaced the current namespace separation offered by
> the hierarchical module system, and instead only had this sort of
> per-type namespace separation, things would still be better!

How much of the structure of the first parameter would you look at? Could
you write an implementation for pairs that depended on the actual types in
the pair? I think you should try to take advantage of the existing type
class machinery as much as possible here, even if what you want are not
exactly (standard) type classes.

> I realise my idea isn't very general in that it only allows this
> namespace lookup/overloading based on the type of a single argument
> parameter, and I think it would be possible with a bit more thinking to
> generalise it to work based on multiple arguments (e.g. via
> argument-dependent lookup, or whatnot).  But even in its current form,
> I honestly think it offers far more flexibility and would lead to
> cleaner APIs than is currently possible.

Read the paper and see if you think something like that might be useful.
In any case, I think there's a decent chance that something useful for
this would also be useful for building interfaces to object-oriented
libraries, and vice versa. I think there's probably something that covers
both cases nicely and uniformly.

Brandon

> --
> % Andre Pang : trust.in.love.to.save
> ___
> Haskell mailing list
> [EMAIL PROTECTED]
> http://www.haskell.org/mailman/listinfo/haskell
>
>

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: type classes, superclass of different kind

2003-12-11 Thread Brandon Michael Moore


On Thu, 11 Dec 2003, Robert Will wrote:

> Hello,
>
> As you will have noticed, I'm designing a little library of Abstract Data
> Structures; here is a small excerpt to get an idea:
>
> class Collection coll a where
> ...
> (<+>) :: coll a -> coll a -> coll a
> reduce :: (a -> b) -> b
>   -> coll a -> b
> ...
>
> class Map map a b where
> ...
> (<+) :: map a b -> map a b -> map a b
> at :: map a b
>   -> a -> b
> ...
>
> Note that the classes don't only share similar types, they also have
> similar algebraic laws: both <+> and <+ are associative, and neither is
> commutative.
>
> Now I would like to have Collection to be a superclass of Map yielding the
> following typing
>
> reduce :: (Map map a b) =>
>   ((a, b) -> c) -> c
>   -> map a b -> c

Functional dependencies will do this.

class Collection coll a | coll -> a where
    ...
    (<+>) :: coll -> coll -> coll
    reduce :: (a -> b -> b) -> b -> coll -> b
    ...

class (Collection map (a,b)) => Map map a b | map -> a b where
    ...
    (<+) :: map -> map -> map
    at :: map -> a -> b

Now you make instances like

instance Collection [a] a where
   (<+>) = (++)
   reduce = foldr

instance (Eq a) => Map [(a,b)] a b where
   new <+ old = nubBy (\(x,_) (y,_) -> x == y) (new ++ old)
   at map x = fromJust (lookup x map)
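
Pulled together, the sketch compiles as a small module (the LANGUAGE
pragmas are the modern spelling; in 2003 these were all behind
-fglasgow-exts):

{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances, FlexibleContexts #-}
module Collections where

import Data.List  (nubBy)
import Data.Maybe (fromJust)

class Collection coll a | coll -> a where
  (<+>)  :: coll -> coll -> coll
  reduce :: (a -> b -> b) -> b -> coll -> b

class Collection map (a, b) => Map map a b | map -> a b where
  (<+) :: map -> map -> map
  at   :: map -> a -> b

instance Collection [a] a where
  (<+>)  = (++)
  reduce = foldr

instance Eq a => Map [(a, b)] a b where
  new <+ old = nubBy (\(x, _) (y, _) -> x == y) (new ++ old)
  at m x     = fromJust (lookup x m)

-- e.g.  reduce (+) 0 [1, 2, 3 :: Int]        == 6
--       at [("x", 1), ("y", 2 :: Int)] "y"   == 2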


> Note that in an OO programming language with generic classes (which is in
> general much less expressive than real polymorphism), I can write
>
> class MAP[A, B] inherit COLLECTION[TUPLE[A, B]]
>
> which has exactly the desired effect (and that's what I do in the
> imperative version of my little library).

This isn't exactly the same thing. In the OO code the interface
collections must provide consists of a set of methods. A particular
type, like COLLECTION[INTEGER], is the primitive unit that can implement
or fail to implement that interface.

In the Haskell code you require a collection to be a type constructor that
will give you a type with appropriate methods no matter what you apply
it to (ruling out special cases like extra-compact sequences of booleans
and so on). A map is not something that takes a single argument and makes
a collection, so nothing can implement both of your map and collection
interfaces.

The solution is simple: drop the spurious requirement that collections
be type constructors (or that all of our concrete collection types be
created by applying some type constructor to the element type). The
classes with functional dependencies say just that: our collection type
provides certain methods (involving the element types).

Collections were one of the examples in Mark Jones' paper on
functional dependencies ("Type Classes with Functional Dependencies",
linked from the GHC Extension:Functional Dependencies section of the
GHC user's guide).

> There seems to be no direct way to achieve the same thing with Haskell
> type classes (or any extension I'm aware of).  Here is a quesion for the
> most creative of thinkers: which is the design (in proper Haskell or a
> wide-spread extension) possibly include much intermediate type classes and
> other stuff, that comes nearest to my desire?
>
> I believe this question to be important and profound.  (We shouldn't
> make our functional designs more different from the OO ones, than they
> need to be.)  If I err, someone will tell me :->

What problems do objects solve? They let you give a common interface to
types with the same functionality, so you can make functions slightly
polymorphic in any argument type with the operations your code needs.
They organize your state. They let you reuse code when you make a new,
slightly different type. Am I missing anything here?

I think type classes are a much better solution than inheritance for
keeping track of which types have which functionality. (at least the way
interface by inheritance works in most typed and popular object oriented
languages.)

Inheritance only really works for notions that only involve the type doing
the inheriting, or are at least heavily centered around that type. I don't
think Functor can be represented as an interface, or at least not a very
natural one. Most languages I know of (see Nice for an exception) also
require you to declare the interfaces a class supports when you declare it,
which is really painful when you want your code to work with types that
were around before you were, like defining a class to represent
marshallable values for interface/serialization code.

Are there any advantages to inheritance for managing interfaces? Maybe
it takes a few minutes less to explain the first time around. It's
probably easier to implement. Beyond that, I see nothing. Any creative
thinkers want to try this? (An answer here would motivate an extension
to the type class system, of course).

Brandon

> Robert

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell

Re: Functional dependencies interfere with generalization

2003-11-27 Thread Brandon Michael Moore


On Wed, 26 Nov 2003, Ken Shan wrote:

> Hello,
>
> Consider the following code, which uses type classes with functional
> dependencies:
>
> {-# OPTIONS -fglasgow-exts #-}
> module Foo where
> class R a b | a -> b where r :: a -> b
>
> -- 1
> rr :: (R a b1, R a b2) => a -> (b1, b2)
> rr a = (r a, r a)
>
> -- 2
> data RAB a = RAB (forall b. (R a b) => (a, b))
> mkRAB :: (R a b) => a -> b -> RAB a
> mkRAB a b = RAB (a, b)
>
> Neither 1 nor 2 passes the type-checker (GHC 6.0.1).  The error messages
> are similar:

I agree that the typechecker could handle this better, but I don't see why
you should need types like this. You should be able to use

rr :: (R a b) =>  a -> (b,b)

and

data RAB a = forall b. (R a b) => RAB (a,b)

equally well, and these typecheck.
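
Spelled out as a complete module, the two workarounds look like this
(a sketch; R is the class from your example, with its fundep):

{-# LANGUAGE FunctionalDependencies, ExistentialQuantification #-}
module FooAlt where

class R a b | a -> b where
  r :: a -> b

-- 1: let the fundep identify b1 and b2 in the signature by hand.
rr :: R a b => a -> (b, b)
rr a = (r a, r a)

-- 2: use an existentially quantified field instead of a universal one.
data RAB a = forall b. R a b => RAB (a, b)

mkRAB :: R a b => a -> b -> RAB a
mkRAB a b = RAB (a, b)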

I think the root of the problem is the foralls. The typechecker doesn't
realize that there is only one possible choice for these universally
quantified variables based on the functional dependencies. For rr it
complains because you can't allow every b2, just b2 = b1, not realizing
that that is already implied by the class constraints. Similarly for RAB
it complains because the pair you give it is obviously not universally
polymorphic in b, not realizing that there is only one choice for b
consistent with the class constraints. Compare this code:

class R a b where r :: a -> b

rr :: (R a b1, R a b2) => a -> (b1, b2)
rr x = let rx = r x in (rx, rx)

and

data P a = P (forall b. (a,b))

Off the top of my head, the solution to this problem would probably be
something like ignoring foralls on a type variable that is determined by
fundeps, but I think the type system would need some sort of new
quantifier or binding construct to introduce a new type variable that is
completely determined by its class constraints. Something like forall a .
constrained b. (R a b) => a -> (b, b). A forall binding a variable
determined by fundeps could be reduced to a constrained binding, which
would be allowed to do things like unify with other type variables.

I'm not sure anything really needs to be done. I think you can always
type these examples by unifying the redundant type variables in a
signature by hand, and by using existentially quantified data types
rather than universally quantified ones. Do you have examples that
can't be fixed like this?

Brandon


>
> Inferred type is less polymorphic than expected
> Quantified type variable `b2' is unified with another quantified type 
> variable `b1'
> When trying to generalise the type inferred for `rr'
> Signature type: forall a b1 b2.
> (R a b1, R a b2) =>
> a -> (b1, b2)
> Type to generalise: a -> (b1, b1)
> When checking the type signature for `rr'
> When generalising the type(s) for `rr'
>
> Inferred type is less polymorphic than expected
> Quantified type variable `b' escapes
> It is mentioned in the environment:
>   b :: b (bound at Foo.hs:17)
> In the first argument of `RAB', namely `(a, b)'
> In the definition of `mkRAB': mkRAB a b = RAB (a, b)
>
> In both cases, the compiler is failing to make use of functional
> dependencies information that it has at its disposal.  Specifically,
> it seems to me that, if two type variables b1 and b2 have been unified
> due to functional dependencies, making two constraints in the context
> identical, then the inner constraint ("inner" with respect to the scope
> of quantified type variables) should be ignored.
>
> Is there a technical reason why the type checker should reject the code
> above?  Would it be possible to at least automatically define a function
> like
>
> equal :: forall f a b1 b2. (R a b1, R a b2) => f b1 -> f b2
>
> for every functional dependency, with which I would be able to persuade
> the type checker to generalize (Baars and Swierstra, ICFP 2002)?  I
> suppose I can use unsafeCoerce to manually define such a function... is
> that a bad idea for some reason I don't see?
>
> Thank you,
>   Ken
>
> --
> Edit this signature at http://www.digitas.harvard.edu/cgi-bin/ken/sig
> Tax the rich!
> new journal Physical Biology: http://physbio.iop.org/
> What if All Chemists Went on Strike? (science fiction):
> http://www.iupac.org/publications/ci/2003/2506/iw3_letters.html
>

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: a type question

2003-11-26 Thread Brandon Michael Moore
It depends what sort of polymorphism you want theta to have.
If your function types involve concrete types you could write something
like

type IntMap = Int -> Bool -> String

and then say
theta :: IntMap -> Int -> String

If you want the argument function to be completely polymorphic you can say
type FunMap = forall a b c . a -> b -> c

But then you couldn't use functions of type IntMap because they are not
completely polymorphic.

If you want the type variables to be bound in the theta type the only
option is what Lennart suggests,

type Funcmap a b c = a -> b -> c
theta :: Funcmap a b c -> d -> e

Trying this out:

type Intmap = Int -> Bool -> String
type Funcmp1 = forall a b c . a -> b -> c
type Funcmp2 a b c = a -> b -> c

f1 :: Intmap -> a -> b
f2 :: Funcmp1 -> a -> b
f3 :: Funcmp2 a b c -> d -> e
(f1,f2,f3) = undefined

We see the types we can get:

*Main> :t f1
f1 :: forall b a. (Int -> Bool -> String) -> a -> b
*Main> :t f2
f2 :: forall b a. (forall a1 b1 c. a1 -> b1 -> c) -> a -> b
*Main> :t f3
f3 :: forall e d c b a. (a -> b -> c) -> d -> e

The IRC channel is pretty good for this sort of question and might have a
better turnaround time than the list.

Brandon

On Wed, 26 Nov 2003, rui yang wrote:

> Thanks.
> What I really want to know is:
> How to describe a new type (Funcmap) which is itself a function type like
> a->b->c so that I can use this new type in other functions? for example, I can
> define some other functions like:
>
> theta  :: Functionmap -> d ->e
>
> minus  :: Funcmap ->..
>
> I can put all the things there like :
> theta  :: (a->b->c)->d->e
> but sometimes it's not convenient to do so.
>
> rui
>
>
>
> 引用 Lennart Augustsson <[EMAIL PROTECTED]>:
>
> > rui yang wrote:
> > > Suppose I have a function:
> > >
> > >  funcmap  :: a->b->c
> > >
> > > can I use type synonyms to describe a new type like this:
> > >  Type Funcmap  = a->b>c ?
> >
> > First, it's 'type' not 'Type'.
> > Second, you want '->' not '>'.
> > Third, all type variables in the RHS must be on the LHS.
> > So, we get
> >
> > type Funcmap a b c = a->b->c
> >
>
>
>
>
> 
> This mail sent through www.mywaterloo.ca
> ___
> Haskell mailing list
> [EMAIL PROTECTED]
> http://www.haskell.org/mailman/listinfo/haskell
>
>

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


type class problem / GHC bug

2003-11-08 Thread Brandon Michael Moore
Hi everyone

I've built GHC from CVS and I'm getting some odd errors about overlapping
instances. This is different from 6.0.1, but it's not obvious it is wrong,
so I'm probably missing something here.

The example is

class A x
class (A x) => B x
instance A x
instance B x

The new GHC complains that the second instance overlaps with the first.
Maybe because of the context on B x, the instance for B x is interpreted as
a claim that we have A x too, but shouldn't it be the other way, that you
need an instance A x from somewhere along with an instance for B x?

Also, I tried to update and rebuild, but the makefiles seem to have the
dependencies wrong or something. I compiled THSyntax.hs by hand, then ran
into some trouble with files that needed some modules from GHCi trying
(and dying) to build before the ghci files. Is there a guide to building
GHC from CVS anywhere? I had the same problem with alex, but that's
small enough to build by hand.

Thanks

Brandon

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: Polymorphic Recursion / Rank-2 Confusion

2003-09-21 Thread Brandon Michael Moore

On Sun, 21 Sep 2003, Dominic Steinitz wrote:

>
> Brandon,
>
> I get the error below without the type signature. My confusion was thinking
> I needed rank-2 types. In fact I only need polymorphic recursion. Ross
> Paterson's suggestion fixes the problem. I stole Even and Odd from Chris
> Okasaki's paper on square matrices. They relate to fast exponentation
> algorithm. There's something to be said for Zeror and One; as you say they
> give the structure in binary.

I would guess you knew you needed a forall, but missed the implicit one
out front. I'm glad you got this working. I'm surprised this didn't
typecheck though. I usually put signatures on top level functions, so I
suppose my intuition is more tuned to types that can be checked rather
than types that can be inferred. Can anyone tell me what's wrong with the
following derivation?

join :: (a -> [c]) -> (b -> [c]) -> ((a,b) -> [c])
join f g (x,y) = f x ++ g y

collect_ colv colw (Zero v) = colv v
collect_ colv colw (Odd v) = collect_ colv (join colw colw) v
collect_ colv colw (Even v) = collect_ (join colv colw) (join colw colw) v
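
(For reference, a sketch of the nested datatype these equations assume;
the real definition is in Dominic's code, so treat this as an
approximation reconstructed from the recursive calls above:

data Vector v w = Zero v
                | Odd  (Vector v (w, w))
                | Even (Vector (v, w) (w, w))

With a signature, collect_ then needs polymorphic recursion, which is the
point of the thread.)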

for the first equation, name the type of collect_
collect_ :: t
name the arguments and unify collect_ with the argument types
colv :: a
colw :: b
(Zero v) :: Vector v w
t = a -> b -> Vector v w -> d
The type of v follows from the data type definition
v :: v
The body is an application (colv v), so we get
a = c -> d, c = v

so far we have
collect :: (v -> d) -> b -> Vector v w -> d
which uses up all the constraints.

In the next equation v has type
v :: Vector v (w,w)
b = (e -> [f]) (from the type of join)
for the recursive call to collect, we get constraints like
d[(w,w)/w] = d
(e->[f])[(w,w)/w] = ((e,e) -> [f])
We can deduce that w is not in the free variables of d or f,
and that e = w.

Now we have the type
collect :: (v -> d) -> (w -> [f]) -> Vector v w -> d

In the last equation the use of join lets us deduce
that d = [f], giving a final type
collect :: (v -> [a]) -> (w -> [a]) -> Vector v w -> [a].

What did I do wrong here? Probably the constraints between unification
variables. Is there a problem with that sort of reasoning? I think I'm
probably expecting some sort of implicit quantification that I haven't
really specified.

> My motivation in using this type was to see if, for example, I could
> restrict addition of a vector to another vector to vectors of the same
> length. This would be helpful in the crypto library where I end up having to
> either define new length Words all the time or using lists and losing the
> capability of ensuring I am manipulating lists of the same length.

I've vaguely heard about something called Cryptol that Galois Connections
wrote, which compiles to Haskell. I don't know about the licensing status
though.

Brandon

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: Polymorphic Recursion / Rank-2 Confusion

2003-09-20 Thread Brandon Michael Moore
Sorry about the empty message. Send /= Cancel

> Can anyone tell me why the following doesn't work (and what I have to do to
> fix it)? I thought by specifying the type of coalw as rank-2 would allow it
> to be used both at a and (a,b).

Frank explained why the type you gave wouldn't work. I would like to add
that your function would type check without the type signature. This
suggests something here is actively confusing. Do you have any idea what
caused this problem?

I hope to help teach Haskell to first year students, so I'm trying to
figure out what parts of the language are hard to get, and how to usefully
explain things. Anything in pure H98 that trips up an experienced
programmer is worth some attention.

Completely unrelated: I think Zero and One would be better names than Even
and Odd, because then the string of constructors writes the size of the
vector in binary, LSB first. I can't see any mnemonic value to Even and
Odd. How do you interpret Even and Odd?

Thanks

Brandon

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: Polymorphic Recursion / Rank-2 Confusion

2003-09-20 Thread Brandon Michael Moore
On Sat, 20 Sep 2003, Ross Paterson wrote:

> On Sat, Sep 20, 2003 at 12:01:32PM +0100, Dominic Steinitz wrote:
> > Can anyone tell me why the following doesn't work (and what I have to do to
> > fix it)? I thought by specifying the type of coalw as rank-2 would allow it
> > to be used both at a and (a,b).
>
> Change the signature to
>
>   coal_ :: (v -> [a]) -> (w -> [a]) -> Vector_ v w -> [a]
>
> Then you can define
>
>   type Vector = Vector_ ()
>
>   coal :: Vector a -> [a]
>   coal = coal_ (const []) (:[])
> ___
> Haskell mailing list
> [EMAIL PROTECTED]
> http://www.haskell.org/mailman/listinfo/haskell
>
>

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: Syntax extensions: mdo and do...rec

2003-09-17 Thread Brandon Michael Moore
Sorry, I forgot the main question I was raising.

Even if we need something other than mdo, do we need to make a distinction
between do and mdo? If left tightening is satisfied then do and mdo are
equivalent for nonrecursive blocks. If we are willing to give up shadowing,
a compiler could translate recursive blocks with mfix and non-recursive
blocks without. Personally I don't like shadowing, and I especially don't
like recursive bindings in some places and shadowing in others.

On the necessity of rec syntax, how is a statement like
 do rec binds1
    rec binds2
    stmts
different from
 do BV1 <- mdo binds1
               return BV1
    BV2 <- mdo binds2
               return BV2
    stmts
where BVn is a tuple of all the variables bound in bindsn?

Brandon

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: Syntax Extensions (and future Haskell)

2003-09-17 Thread Brandon Michael Moore

On Thu, 18 Sep 2003 [EMAIL PROTECTED] wrote:

> Although a number of comments in this discussion make some sense,
> I personally am getting worried about the direction that it is taking.
> I have been a (fairly quiet) Haskell user for some time. I like it
> because of the strong connection to standard mathematical constructs,
> and the dedication to equational reasoning. I was dubious when Monads
> were introduced, but grew to be comfortable with that as an approach
> to embedding temporal characteristics, I try not to use do, I like the
> fact that I don't have to. But, recent developments on this list suggest
> that this is all going to be a thing of the past. There is a real danger
> that Haskell will just turn into yet another procedural language.

What worries you? I don't see what configuration has to do with procedural
languages. The elegant solution everyone is looking for would make it
easier for the language to evolve, but I don't see why it would become
more procedural. If anything most of the current extensions move Haskell
even farther from normal procedural languages. I'm curious what procedural
tendencies you see. I agree monads are becoming very popular, but I don't
equate monads and imperative languages. Maybe IO and ST are imperative.

> Firstly I want to bring into the open something that you would all be
> aware of. It is possible to change any typical procedural imperative
> language into a functional language by a change of point of view.
> All you do is just state that each command in the imperative is a
> function that operates on the state of the machine. The entire program
> becomes a composition of these functions. Control statements such as
> for-do-while-until-repeat-if, simply become higher order functions.
> That's why the monad stuff works, all it does is demand that you
> explicitly admit to the passing of the state.
>
> If you keep going the way you are going you will simply turn
> Haskell into a procedural language with some snazzy data-types.
> It would survive, perhaps, but in name only.
>
> The importance of Haskell is not that it is Functional, this is
> in reality just a technicality of point of view, the importance
> of Haskell is the WAY in which it is functional, the emphasis
> it puts on the manner in which the total function is decomposed.
> It is important not simply that Haskell admits equational reasoning,
> but that it is equational reasoning that a human can do, not something
> that requires a heavy duty code transformer to work out.
>
> At least this is how I see it, perhaps this is simply the
> point at which Haskell and I part company?
>
> I also don't see the point of the language configuration pragmas
> either. Uniformity is important. Instead of agreeing to disagree,
> and coming up with a rag-bag language, the points need to be nutted
> out until they make consistent sense. The pragmas do not represent
> any uniformity, they actually represent a schism, you can't agree
> so you are splitting the language into incompatible variants. It
> solve the political problem of in-fighting, but only by letting
> the in-fighting destroy the language. It's the same sort of thing
> that splits C++ into multiple camps, the same thing that gives the
> multiple levels of comments in comments for postscript, and so on.

I agree with Henrik here. I see two main purposes to configuration:
backwards compatibility, for continuing support of the standard and of
legacy code, and the freedom to experiment with the language without
committing to extensions. I see multi-parameter type classes and explicit
quantification as part of the current language, and expect others feel the
same way. I expect implicit parameters to remain experimental until we
really understand them, for example.

> Remember Flon's law. The fact that you CAN do something does not
> mean that it is a good idea.
>
> I'll get off my soap box now.

If it gives you a useful perspective, and gets your insights back to the
rest of us, stay up there for a while. We can always use good ideas.

Brandon

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: Syntax extensions: mdo and do...rec

2003-09-17 Thread Brandon Michael Moore
On Thu, 18 Sep 2003, Ross Paterson wrote:

> The arguments being made here can all be found in the recursive monad
> bindings papers and Levent's thesis.

I don't remember anything about finding smaller binding groups in the
mdo paper. I don't think I've read Levent's thesis.

> On Wed, Sep 17, 2003 at 11:41:24AM -0700, Brandon Michael Moore wrote:
> > In any case, I don't see the need for explicit rec groups. Can't GHC just
> > find the strongly connected components like it already does with let
> > bindings?
>
> That's what GHC and Hugs do now for mdo (actually segments rather than
> components, because actions can't be rearranged).
>
> > Don't the laws for loop and mfix justify the transformation?
>
> The loop axioms do, but Levent didn't assume right tightening, which
> corresponds to moving bindings down from a rec, because monads like
> exceptions don't satisfy it.  The same would go for a loop defined on
> an exception arrow.  And that's the biggest problem with implicit
> segmentation: you need to understand what it does to work out the
> meaning of your program.  Again there's an example in those papers
> and Levent's thesis.

I expected any problems would be like that. I remember hearing about a
fixpoint operator for the continuation monad that satisfied all the laws
but right tightening. Well, this would fall under "If it really turns out
to be frequently necessary".

> BTW, in GHC 6.2 with the -fglasgow-exts -farrows flags, you will be able
> to use either mdo or do...rec for monads and for arrows, as an experiment.
> (Maybe "rec" wasn't such a great keyword to take from the identifier
> space.)

When can we expect 6.2?

Brandon

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: Syntax extensions: mdo and do...rec

2003-09-17 Thread Brandon Michael Moore

> hello,

> i have no strong feelings about that either way,
> however since in haskell we do not have "let" vs "let rec" distinctions,
> perhaps we should not have "do" vs "do rec" distinction.
> this of course would break programs relying on shadowing
> (and at least i write quite a few of those, but that is mostly habit).
> i doubt that this will cause many backward compatability problems,
> as one can compile old modules (not using recursive do) without
> the flag enabling recursive dos.
>

I think you have a great idea here.

The problem with that is that not all monads support recursive bindings.
We can deal with that, but the desugaring of a do statement wouldn't be so
trivial any more. The most sensible approach I can think of is to analyze
the bindings and generate a translation involving mfix if the bindings are
recursive, and a translation without it if they are not. I don't like
complicating the translation, but at least it is still purely syntactic.
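
A small sketch of that translation, in today's syntax (RecursiveDo enables
mdo; the example is only illustrative):

{-# LANGUAGE RecursiveDo #-}

import Control.Monad.Fix (mfix)

-- A recursive binding in a do block ...
ones :: IO [Int]
ones = mdo
  xs <- return (1 : xs)        -- xs appears on the right of its own binding
  return (take 3 xs)

-- ... and the explicit mfix translation it corresponds to.
onesDesugared :: IO [Int]
onesDesugared = do
  xs <- mfix (\xs -> return (1 : xs))
  return (take 3 xs)

main :: IO ()
main = do
  a <- ones
  b <- onesDesugared
  print (a, b)                 -- ([1,1,1],[1,1,1])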

I'm also worried about making the typing of a statement depend on the
binding structure, rather than just the types of subexpressions and how
they are used. It's probably a bit simpler than the normal mental
typechecking we do, but it is different. If we do this we should try for
nice warnings about recursively defined variables if failures to satisfy
the MonadFix constraint suggest the do statement was wrong.

In any case, I don't see the need for explicit rec groups. Can't GHC just
find the strongly connected components like it already does with let
bindings? Don't the laws for loop and mfix justify the transformation? If
you really need to specify the binding groups (or at least provide an
upper bound on their size) you can use explicit nested mdo statements.

I don't think we should force the programmer to explicitly identify groups
of recursive bindings. It's probably not even worth providing syntax
beyond nested mdo statements unless it is frequently necessary. I agree
with Iavor that we should try for simplicity and consistency. Are there
any gaping holes in my musings on his proposal?

Brandon

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: Question about implementation of an "information-passnig" arrow

2003-09-16 Thread Brandon Michael Moore


On Mon, 15 Sep 2003, Yu Di wrote:

> data MyArrow a b = MyArrow ((String, a) -> (String,
> b))
>
> i.e. there is an "information" asscioated with each
> piece of data (represented by the string), and I want
> to pass it around. And often the arrow's processing
> logic will depend on the input information, therefore
> a monad-style
>
> data MyArrow a b = MyArrow (a -> (String, b))
>
> will not work.
>
> Now I have a problem with the definition of "pure" and
> "first". At first, I declared
>
> pure f = MyArrow (\(s, x) -> (s, f x))
> first (MyArrow f) = MyArrow (\(s, (x, y)) -> let (s',
> z) = f (s, x) in (s', (z, y)))
>
> this seems to work, but then I begin to have problems
> with the "data-plumbing" pure arrows, e.g. in
>
> pure (\x -> (x, x)) >>> first someArrow >>> pure
> (\(_, x) -> x)
>
> Ideally, this arrow will preserve whatever information
> I put there for the input, but because "first
> someArrow" will change the WHOLE information
> associated with the pair of result, I can't find any
> way to let "pure (\(_, x)->x)" (which is an extremely
> generic function) retrieve the PART of information for
> the second piece in the pair tuple.
>
> I thought about this a lot, but it seems to me that
> the only way to solve this is to somehow make the
> information "lookupable" from the data itself, not
> "placed beside" the data, but how I can do that? And
> can there be some other solution to this?
>
> Thanks very much!
>
> Di, Yu

What are you trying to do here? From the type of the arrow you are trying
to define it looks like pure functions on (String,a) pairs should work
just as well. Whatever you are doing, I would guess that the tagging is
fairly orthogonal to the use of arrows. I think you can probably get by
with pure functions on pairs of strings and values, with a few lifting
combinators.

For example, if I assume that the tags are used so functions can add
comments on the values as they pass through you might write something like

type Annotated a = (String,a)

comment :: String -> Annotated a -> Annotated a
comment newcomment (comments,val) = (newcomment++comments,val)

liftAnn2 :: (a -> b -> c) -> Annotated a -> Annotated b -> Annotated c
liftAnn2 f (c1,a) (c2,b) = (c1++c2, f a b)

plusAnn :: Num a => Annotated a -> Annotated a -> Annotated a
plusAnn x y = comment "Added two numbers" (liftAnn2 (+) x y)

Now you can define functions like
addThree x y z = plusAnn x (plusAnn y z)
and run computations like
addThree ("One",1) ("Two",2) ("Three",3)
and get results like
("Added two numbersOneAdded two numbersTwoThree",6)

Obviously the policy for combining tags is pretty bad, but you could fill
in whatever you wanted.

I must say I'm pretty dubious though. It sounds like you intend to compute
over these tagged values and combine them. The only sensible use of string
tags on values passed freely around that I can think of is building up a
representation of the computation that produced the value, like the R.hs
module by Claus Reinke. To do that, one tag per arrow is exactly what you
want. For most other uses I think the tag should probably be a data
structure rather than a string. If the values are sitting in a data
structure (like an association list) it's a different story, of course.
I could help more if I had a better idea what your purpose is.

Brandon

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


RE: Circular Instance Declarations

2003-09-14 Thread Brandon Michael Moore


On Thu, 11 Sep 2003, Simon Peyton-Jones wrote:

> OK, I yield!
>
> The HEAD now runs this program.  It turned out to be a case of
> interchanging two lines of code, which is the kind of fix I like.
>
> Simon

Cool! Yet another domain where Haskell handles infinities quite happily.
Thanks.

Hopefully I'll have some code to contribute soon.

  Brandon

>
>
> | -Original Message-
> | From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
> Behalf Of Ashley Yakeley
> | Sent: 07 September 2003 06:57
> | To: [EMAIL PROTECTED]
> | Subject: Circular Instance Declarations
> |
> | When -fallow-undecidable-instances is switched on, is there any reason
> | why circular instances are forbidden? For instance:
> |
> |  module CircularInsts where
> | {
> | data D r = ZeroD | SuccD (r (D r));
> |
> | instance (Eq (r (D r))) => Eq (D r) where
> | {
> | ZeroD == ZeroD = True;
> | (SuccD a) == (SuccD b) = a == b;
> | _ == _ = False;
> | };
> |
> | newtype C a = MkC a deriving Eq;
> |
> | equalDC :: D C -> D C -> Bool;
> | equalDC = (==);
> | }
> |
> | When I compile this, I get this:
> |
> |  $ ghc -fglasgow-exts -fallow-undecidable-instances -c
> CircularInsts.hs
> |  CircularInsts.hs:2:
> | Context reduction stack overflow; size = 21
> | Use -fcontext-stack20 to increase stack size to (e.g.) 20
> | `Eq (C (D C))' arising from use of `==' at CircularInsts.hs:16
> | `Eq (D C)' arising from use of `==' at CircularInsts.hs:16
> | `Eq (C (D C))' arising from use of `==' at CircularInsts.hs:16
> | `Eq (D C)' arising from use of `==' at CircularInsts.hs:16
> |
> | Would it be reasonable for the compiler to check back through the
> stack
> | and allow the circularity? It will just create an ordinary recursive
> | function.
> |
> | --
> | Ashley Yakeley, Seattle WA
> |
> | ___
> | Haskell mailing list
> | [EMAIL PROTECTED]
> | http://www.haskell.org/mailman/listinfo/haskell
>
>
> ___
> Haskell mailing list
> [EMAIL PROTECTED]
> http://www.haskell.org/mailman/listinfo/haskell
>
>
>

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: Circular Instance Declarations

2003-09-10 Thread Brandon Michael Moore
On Wed, 10 Sep 2003, Ashley Yakeley wrote:
>  Brandon Michael Moore <[EMAIL PROTECTED]> wrote:
>
> > A simple irregular type is
> > Irr a = Con a (Irr (F a))
> > (as long as F uses a)
>
> Would this be an irregular type, with F as ((->) val)?
>
>   data SymbolExpression sym val a = ClosedSymbolExpression a |
>OpenSymbolExpression sym (SymbolExpression sym val (val -> a));

This would be an irregular type. In my proposal an instance declaration
deriving some instance of SymbolExpression sym val a could use the types
sym, val, and a in the context, but not (val -> a), which would only arise
from unfolding the type constructor. Of course, when I say "proposal" I
mean "would be a proposal if only I could prove that last lemma".

> I used to use this in HScheme for expressions with free variables, as in
> the lambda calculus. For instance, "\x.xy" has "y" as a free variable,
> and might be represented as something like this:
>
>   OpenSymbolExpression "y" (ClosedSymbolExpression (\y -> (\x -> x y)))
>
> It's very clean and safe, and can be made an instance of
> FunctorApplyReturn, but it turned out to be a bit slow. I also tried
> this:
>
>   data ListSymbolExpression sym val a =
>  MkListSymbolExpression [sym] ([val] -> a);
>
>   MkListSymbolExpression ["y"] (\[y] -> (\x -> x y))
>
> This is much simpler, but now one has to make sure that the lists are
> the same size, so to speak. But this one turned out to be the fastest:
>
>   newtype FuncSymbolExpression sym val a =
>MkFuncSymbolExpression ((sym -> val) -> a);
>
>   MkFuncSymbolExpression (\f -> (\x -> x (f "y")))
>
> The downside is that there's no way to find out what the free variables
> are. That's OK for Scheme, however, since Scheme doesn't complain about
> unbound variables until run-time.
>
> So, um, any excuse to talk about HScheme anyway.

It looks like your scheme puts the type system to good use. I used a value
type with numbers, Val->Val functions, and some other stuff. I gave up
when I realized I needed to thread references through everything to
implement R5RS. I suppose everyone has started a Scheme in Haskell
sometime.

Brandon


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Request for Instances

2003-09-10 Thread Brandon Michael Moore
Hi everyone.

 I've been looking at the restrictions on instances in the H98 standard
and thinking about alternatives. I would like to have a body of data type
and class/instance declarations so I can test how useful various
extensions would be. Please send or direct me to code that requires
-fallow-undecidable-instances.

Thanks.
  Brandon


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Type Class Problem

2003-09-10 Thread Brandon Michael Moore
Hello everyone

I think I'm close to useful results on the instance restrictions.

First there's an obvious extension to the Haskell98 rule. The H98 rule
says the instance head must be a type constructor applied to type
variables, and the context must mention only those type variables. This
gives a termination proof by counting constructors. If the rule is
weakened to allow an arbitrary type expression in the head and require
that the context mention only strict subexpressions of the head, the same
proof still applies. I'm not sure how useful this is, but we might as well
allow it.

Second, I have half a result in the direction of allowing the context to
mention types that arise from applying type constructors used in the
instance head.

This requires accepting regular derivations, which means the compiler
would need to track all previous goals while deriving an instance, and
handle a second occurrence of a goal by producing a reference to the
dictionary for the first occurrence (which may not be constructed yet),
rather than blindly continuing the derivation.

First I will explain the proof method. It's related to structural
induction, but not quite the same. Suppose we have a subexpression
relation on type expressions such that every type expression has only a
finite number of subexpressions. If instance contexts only mention
subexpressions of the head, then searching for an instance for a type can
only generate (#of subexpressions * #of classes) distinct goals. Therefore,
in finite time either the derivation will fail, or we will produce a
regular derivation. Alternately, we only try to derive an instance the
first time it arises as a goal, so each time we apply an instance rule
there is one less goal in the pool of possible goals, which must
eventually be exhausted.

The syntactic subexpression relation obviously has these properties, but
it's often useful to refer to types that show up when we apply type
constructors.

For example, my case and a simplification of Ashley's:
data Mu f = In (f (Mu f))
instance C (f (Mu f)) => C (Mu f)
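
A concrete rendering of that example in modern GHC (Show stands in for the
class C; UndecidableInstances is today's spelling of the flag):

{-# LANGUAGE FlexibleContexts, UndecidableInstances #-}

newtype Mu f = In (f (Mu f))

-- The context mentions f (Mu f), which only arises by unfolding the head.
instance Show (f (Mu f)) => Show (Mu f) where
  show (In x) = "In " ++ show x

-- Solving Show (Mu []) demands Show [Mu []], which demands Show (Mu [])
-- again: exactly the regular, circular derivation discussed here, which
-- GHC resolves with a recursive dictionary.
example :: String
example = show (In [In [], In [In []]] :: Mu [])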

On the other hand, we can't unfold all type constructors, because some
types are irregular; that is, we encounter an infinite number of types
while expanding the type constructors.

Define a kind-indexed family of predicates on type constructors, R_K(T),
where the property is true if T::K, T is regular (including expanding the
insides of lambdas), and if K=K1->K2, then R_K2(T t) for all t such that
R_K1(t). Say a type is regularity preserving if it satisfies the predicate
corresponding to its kind. Any type expression built entirely from
regularity preserving type constructors will be regular. I think that a
subexpression relation that allows expanding applications of
regularity preserving type constructors will give any type a finite number
of subexpressions, but I don't know enough about the structure of
regularity preserving type constructors to prove it.

The missing half here is an algorithm for testing whether a type
constructor is regularity preserving. For this the body of the type
constructor can be simplified to consist of just the type constructor
applications in the body. Apply the type constructor to skolem arguments,
and check whether the resulting tree is regular. I don't know how to do
this.

Another approach is to draw out a dependency graph between type
constructors, with an edge from A to B for each use of B in the definition
of A, labeled with the arguments used. Then the question is whether,
starting from our type applied to type variables, we can find some path
through the graph that generates an infinite number of types, where we
keep track of the current node and the current arguments, and modify the
arguments as directed by a label when moving along an edge. I don't know
if the search can even find a tail-repeating path that witnesses the
irregularity, let alone a family of paths that can be tested.

Any assistance here would be appreciated.

Thanks
Brandon


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: Circular Instance Declarations

2003-09-10 Thread Brandon Michael Moore

On Sun, 7 Sep 2003, Ashley Yakeley wrote:

> In article <[EMAIL PROTECTED]>,
>  Brandon Michael Moore <[EMAIL PROTECTED]> wrote:
>
> > Detecting circularity in a derivation is equivalent to accepting a regular
> > infinite derivation for instances. Would you have a use for irregular
> > derivations?
>
> Could you give me an example?

I should have asked whether you needed irregular types, and "undecidable"
instances for irregular types.

I'm close to a proof that will justify more permissive instances for
regular types (plus a bit), but I haven't made much progress on irregular
types. I'm wondering if anyone actually uses them, let alone
fancy instances for them. Also, if I tried to expand my approach to
irregular types it would require generating dictionaries at runtime, rather
than just defining dictionaries recursively.

In case the word irregular is the problem, I'll give my definition and how
I'm applying it to types. The definition is from Pierce, in "Types and
Programming Languages".

An irregular tree is a tree with an infinite number of distinct subtrees.

When I say a type is irregular I mean the infinite tree you get when you
(recursively) expand all the applications of type constructors is
irregular.

A simple irregular type is
Irr a = Con a (Irr (F a))
(as long as F uses a)

This expands to something like <a, <F a, <F (F a), ...>>>, where <x, y>
denotes a sum type. Each right child is like the parent with an extra F
everywhere, so the tree is irregular.

The sort of instance I'm interested in is something like
instance (Eq a, Eq (Irr (F a))) => Eq (Irr a)
where the context only mentions (subexpressions of) type expressions
encountered while expanding the type.
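
For concreteness, here is that shape with F a = [a] (an arbitrary choice
for illustration):

{-# LANGUAGE FlexibleContexts, UndecidableInstances #-}

-- The [a] in the recursive field makes Irr irregular: Irr Int unfolds
-- through Irr [Int], Irr [[Int]], and so on, never repeating.
data Irr a = Con a (Irr [a])

-- The declaration itself compiles, but actually solving Eq (Irr Int) at a
-- use site would demand Eq (Irr [Int]), Eq (Irr [[Int]]), ... without ever
-- cycling back, which is why irregular types would need dictionaries
-- generated at runtime.
instance (Eq a, Eq (Irr [a])) => Eq (Irr a) where
  Con x xs == Con y ys = x == y && xs == ys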

Are you using anything like this?

Brandon


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: Circular Instance Declarations

2003-09-07 Thread Brandon Michael Moore
Hi Ashley

  See the thread "Type Class Problem". In his post on Aug 22 Simon
Peyton-Jones said that it shouldn't be hard to implement, and mentioned
that it would ruin the property that dictionaries can be evaluated by
call-by-value. I couldn't puzzle out enough of the type class system to
make the change on my first try, and since then I've been looking for a
more general solution.

Actually, I'm surprised someone else has a use for this. I wanted
circular instances for playing with the paper "Recursion Schemes from
Comonads". What are you trying to do?

Detecting circularity in a derivation is equivalent to accepting a regular
infinite derivation for instances. Would you have a use for irregular
derivations?

Brandon

On Sat, 6 Sep 2003, Ashley Yakeley wrote:

> When -fallow-undecidable-instances is switched on, is there any reason
> why circular instances are forbidden? For instance:
>
>  module CircularInsts where
> {
> data D r = ZeroD | SuccD (r (D r));
>
> instance (Eq (r (D r))) => Eq (D r) where
> {
> ZeroD == ZeroD = True;
> (SuccD a) == (SuccD b) = a == b;
> _ == _ = False;
> };
>
> newtype C a = MkC a deriving Eq;
>
> equalDC :: D C -> D C -> Bool;
> equalDC = (==);
> }
>
> When I compile this, I get this:
>
>  $ ghc -fglasgow-exts -fallow-undecidable-instances -c CircularInsts.hs
>  CircularInsts.hs:2:
> Context reduction stack overflow; size = 21
> Use -fcontext-stack20 to increase stack size to (e.g.) 20
> `Eq (C (D C))' arising from use of `==' at CircularInsts.hs:16
> `Eq (D C)' arising from use of `==' at CircularInsts.hs:16
> `Eq (C (D C))' arising from use of `==' at CircularInsts.hs:16
> `Eq (D C)' arising from use of `==' at CircularInsts.hs:16
>
> Would it be reasonable for the compiler to check back through the stack
> and allow the circularity? It will just create an ordinary recursive
> function.
>
> --
> Ashley Yakeley, Seattle WA
>
> ___
> Haskell mailing list
> [EMAIL PROTECTED]
> http://www.haskell.org/mailman/listinfo/haskell
>
>





___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


RE: Type class problem

2003-08-30 Thread Brandon Michael Moore
On 28 Aug 2003, Carl Witty wrote:

> On Thu, 2003-08-28 at 13:10, Brandon Michael Moore wrote:
> > Unfortunately I don't have a useful syntactic condition on instance
> > declarations that ensures termination of typechecking. If types are
> > restricted to products, sums, and explicit recursion, then termination is
> > ensured if we restrict the assumptions of a declaration to instances for
> > subexpressions of the type we are declaring an instance for (there are
> > only a finite number of subexpressions times a finite number of classes,
> > and one is added each time we apply a rule). I haven't made any progress
> > if type operators are allowed, and I don't have any simple check whether a
> > Haskell type expression can be represented without type operators. I
> > was hoping to get normalization of type expressions from the simple
> > kinding, but recursive operator definitions break that.
Rather, what I was hoping for was regularity of the resulting types, or
something like that: we can always evaluate a type expression to head
normal form, but the complete expansion of a type can be irregular.

> I think some of David McAllester's papers from about 1990-1994 may be
> relevant here.  He has several papers on deciding when sets of inference
> rules are terminating, or terminating in polynomial time.  (He applies
> this in the context of automated theorem proving, but it should apply
> perfectly well to type class inference as well.)
>

Thanks, this is interesting work. I've read "New Results on Local
Inference Relations", and skimmed a few other papers. Too bad I can't see
how to use it. Determining locality seemed to require a global analysis,
and superficial rules look too restrictive for instance declarations. Some
of the ideas could probably be adapted to prove termination (and bounds)
for sets of rules if the antecedents of rules mention only subterms of
the conclusion. It's pretty trivial to prove that regular terms have
regular derivations, if any, but I haven't looked for good bounds.

It looked to me like most of the results assumed that terms were finite,
but most of it should carry over to regular terms. I don't think it would
be easy to extend to irregular terms, even if I had a good
characterization of Haskell types. Does anyone know of any results in that
direction? Simple kinds give you head normalization, but I don't know how
to describe the sorts of terms that end up as constructor arguments as you
evaluate type expressions. I want some reasonable characterization of the
sort of trees you get when you evaluate type expressions completely. Does
anyone know of papers or books on this?

Thanks
Brandon




RE: Type class problem

2003-08-28 Thread Brandon Michael Moore

On Fri, 22 Aug 2003, Simon Peyton-Jones wrote:

>
> Brandon writes
>
> | An application of Mu should be showable if the functor maps showable
> types
> | to showable types, so the most natural way to define the instance
> seemed
> | to be
> |
> | instance (Show a => Show (f a)) => Show (Mu f) where
> |   show (In x) = show x
> |
> | Of course that constraint didn't work
>
> Interesting.  This point is coming up more often!  You'll find another
> example of the usefulness of constraints like the one in your instance
> decl in "Derivable Type Classes" (towards the end).
> http://research.microsoft.com/~simonpj/Papers/derive.htm
>
> Valery Trifonov has a beautiful paper at the upcoming Haskell workshop
> 2003 that shows how to code around the lack of universally quantified
> constraints.  I strongly suggest you take a look at it, but it doesn't
> seem to be online yet.
>
>
> | Constraint Stack Overflow:
> |   Observable (N (Mu N))
> |   Observable (Mu N)
> |   Observable (N (Mu N))
> |   Observable (Mu N)
> |   ...
> |
> | I expected that that constraint solver would realize it could
> construct a
> | dictionary transformer with a type like Observer Nat -> Observer Nat
> and
> | take the fixed point to get an instance for Nat. I haven't looked at
> the
> | implementation, but it seems like it would be easy to add the
> constraint
> | we are trying to derive to the list of assumptions when trying to
> | construct all the anteceedents of an instance declaration.
>
> That's true, I believe, but
> a) it's a bit fragile (a sort of half-way house to solving the halting
> problem)
> b) at the moment dictionaries have the property that you can always
>   evaluate them using call-by-value; if they could be recursively
>   defined (as you suggest) that would no longer be the case
>
> Mind you, GHC doesn't currently take advantage of (b), so maybe it
> should be ignored.  Adding the current goal as an axiom would not be
> difficult, but I don't have time to do it today!  Is anyone else
> interested in such a feature?

I would like to try making this change, but I couldn't puzzle out enough
of the type class system the last time I looked. I would appreciate
advice, references, or even just a list of the relevant modules.

With regard to a), taking our goal as an axiom while searching for a
derivation can be expressed in language that sounds less ad-hoc: accept
regular instance declarations.

Unfortunately I don't have a useful syntactic condition on instance
declarations that ensures termination of typechecking. If types are
restricted to products, sums, and explicit recursion, then termination is
ensured if we restrict the assumptions of a declaration to instances for
subexpressions of the type we are declaring an instance for (there are
only a finite number of subexpressions times a finite number of classes,
and one is added each time we apply a rule). I haven't made any progress
if type operators are allowed, and I don't have any simple check whether a
Haskell type expression can be represented without type operators. I
was hoping to get normalization of type expressions from the simple
kinding, but recursive operator definitions break that.

On the other hand, allowing implications in a context is more general.
Adding the conclusion of a rule as an axiom while trying to derive the
context is equivalent to modifying every declaration
instance (ct1,ct2,ct3) => goal
to read
instance (goal=>ct1,goal=>ct2,goal=>ct3) => goal.
The methods defined in the instance should still typecheck, if we use the
context and the conclusion of the instance declaration when checking the
method definitions.

It's worth noting that if we have a restriction on the form of instance
declarations that ensures decidability of type checking, then generalizing
the form of instance declarations from
(conclusion, .., conclusion) => instance conclusion
to
(ctx => conclusion, .. , ctx => conclusion) => instance conclusion
doesn't break decidability, as long as
1) the instance would still be syntactically valid if we replaced all the
implications in the context with their right-hand sides
2) all the implications in the context also satisfy the syntactic
validity rule.
Unfortunately the only restriction I know of (the one from the Haskell
Report) isn't very useful even with generalized contexts. On the other
hand, allowing regular derivations widens the space of safe instances, but
doesn't give any guidance toward restrictions that ensure safety.

Allowing implications in contexts even allows us to derive instances for
some irregular types:

data Twice f x = T (f (f x))
data Growing f = G (f (Growing (Twice f)))
data Id x = Id x

Suppose we want to define instances that will imply Show (Growing Id).
Growing Id is an irregular type so allowing irregular derivations isn't
enough, but the following instances are acceptable

instance (forall a. Show a => Show (f a), Show x) => Show (Twice f x) where
  show (T ffx) = "T " ++ show ffx
instance (forall a.Sh
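For what it's worth, instances of roughly this shape can be written with the
QuantifiedConstraints extension that GHC gained much later (8.6); the
following is only a sketch under that assumption, not something that was
available at the time:

{-# LANGUAGE QuantifiedConstraints, FlexibleContexts, UndecidableInstances #-}

data Twice f x = T (f (f x))
data Growing f = G (f (Growing (Twice f)))
data Id x      = Id x

instance (forall a. Show a => Show (f a), Show x) => Show (Twice f x) where
  show (T ffx) = "T " ++ show ffx

instance (forall a. Show a => Show (f a)) => Show (Growing f) where
  show (G x) = "G " ++ show x

instance Show x => Show (Id x) where
  show (Id x) = "Id " ++ show x

-- Show (Growing Id) is now solvable even though Growing Id is irregular;
-- the dictionaries it needs are built recursively as the value is traversed.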

Re: Type class problem

2003-08-17 Thread Brandon Michael Moore


On Sun, 17 Aug 2003 [EMAIL PROTECTED] wrote:

>
> > I defined type recursion and naturals as
>
> > >newtype Mu f = In {unIn :: f (Mu f)}
> > >data N f = S f | Z
> > >type Nat = Mu N
>
> > An application of Mu should be showable if the functor maps showable types
> > to showable types, so the most natural way to define the instance seemed
> > to be
>
> > instance (Show a => Show (f a)) => Show (Mu f) where
> >   show (In x) = show x
>
> > Of course that constraint didn't work.
>
> Well, it can if we write it in a bit different way:
>
> instance (Show (f (Mu f))) => Show (Mu f) where
>show (In x) = show x
>
> instance Show (N (Mu N)) where
>show Z = "Z"
>show (S k) = "S "++show k
>
> *Main> show (In (S (In (S (In Z)))))
> "S S Z"
>
> This solution is akin to that of breaking the type recursion when
> defining the fixpoint combinator fix. Only here we face the recursion
> on constraints. I believe the same solution should work for the
> Observable class. You didn't post the definition of the Observable
> class, so I couldn't test my claim.
>
> Flags used:
>   ghci -fglasgow-exts -fallow-undecidable-instances  /tmp/a.hs

Thanks for this solution.

You can get HOOD from the libraries page on haskell.org. It (Observe.lhs)
defines Observable.

I still want the instance (Show a) => Show (N a) for showing all the
intermediate values that you get with the recursion combinators, so I
think I'll need to add -fallow-overlapping-instances.
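Concretely, the overlap is between the specific instance above and the
generic one for N; a small sketch, spelled with the per-instance OVERLAPPING
pragma that later GHCs use in place of the global flag:

{-# LANGUAGE FlexibleContexts, FlexibleInstances, UndecidableInstances #-}

newtype Mu f = In { unIn :: f (Mu f) }
data N f = S f | Z

-- Break the recursion at f (Mu f), as above.
instance Show (f (Mu f)) => Show (Mu f) where
  show (In x) = show x

-- The specific instance used when showing a whole Mu N ...
instance {-# OVERLAPPING #-} Show (N (Mu N)) where
  show Z     = "Z"
  show (S k) = "S " ++ show k

-- ... overlapping with the generic one wanted for intermediate values.
instance Show a => Show (N a) where
  show Z     = "Z"
  show (S k) = "S " ++ show k

-- show (In (S (In (S (In Z))))) still gives "S S Z"; the most specific
-- matching instance is chosen for N (Mu N).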

I still think it would be useful to add a goal as an axiom while trying to
prove the antecedents of any derivation rule that applies. Equivalently,
you could consider it accepting regular derivations rather than just
finite derivations.

I think allowing regular derivations might make it possible to find more
liberal constraints on the form of instances that would still insure the
decidability of solving for instances.

If the form of types is restricted to explicit recursion, variants, and
tuples:

T := mu X.T | <T1, ..., Tn> | (T1, ..., Tn)

Then deriving an instance is decidable so long as the context of an
instance mentions only subexpressions of the head. (because there are only
a finite number of distinct subexpressions, and each time we use a rule we
add one to our set of axioms)

Of course it breaks down if you allow type operators...

Are there any papers I should read if I want to find something useful in
this vein? I just finished grabbing everything relevant in the first three
pages or so of googling for "type classes". The only book on type theory I
have is Pierce's "Types and Programming Languages", and I have nothing on
logic. I have a vague idea that coinduction might be useful, and an even hazier
idea that we might be able to get away with some non-regular derivation
trees.

Interesting? Useful? Should go to haskell-cafe?

Thanks for any advice

Brandon




Type class problem

2003-08-14 Thread Brandon Michael Moore
To try some of the examples from the paper "Recursion Schemes from Comonads",
I wanted to define instances of Show and Observable for types defined as
the fixed point of a functor.

I defined type recursion and naturals as

>newtype Mu f = In {unIn :: f (Mu f)}
>data N f = S f | Z
>type Nat = Mu N

An application of Mu should be showable if the functor maps showable types
to showable types, so the most natural way to define the instance seemed
to be

instance (Show a => Show (f a)) => Show (Mu f) where
  show (In x) = show x

Of course that constraint didn't work, so I made a class PShow with a
method that witnessed the constraint

>class PShow f where
>  pshow :: (Show a) => f a -> String

Now I could define a generic show instance for Mu, and got a Show instance
for Nat by defining a PShow instance for N

>instance (PShow f) => Show (Mu f) where
>  show (In x) = pshow x

>instance PShow N where
>  pshow Z = "Z"
>  pshow (S k) = "S " ++ show k

show (In (S (In (S (In Z))))) -> "S S Z"

This works fine, but you need to be able to use the method of PShow in
the definition of Show (Mu f). I couldn't figure out how to do the same
thing with Observable (the use of the class methods is a few layers away
from the public interface).
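As an aside, the constraint exactly as first written can be expressed with
the QuantifiedConstraints extension in GHC 8.6 and later; a minimal sketch
under that assumption (none of this existed at the time):

{-# LANGUAGE QuantifiedConstraints, FlexibleContexts, UndecidableInstances #-}

newtype Mu f = In { unIn :: f (Mu f) }
data N f = S f | Z

-- "f maps showable types to showable types", as a quantified constraint.
instance (forall a. Show a => Show (f a)) => Show (Mu f) where
  show (In x) = show x

instance Show a => Show (N a) where
  show Z     = "Z"
  show (S k) = "S " ++ show k

-- show (In (S (In (S (In Z)))))  ==  "S S Z"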

I tried to define a set of mutually recursive definitions

instance (Observable (f (Mu f))) => Observable (Mu f) where
  observer (In x) = send "In" (return In << x)
instance (Observable a) => Observable (N a) where
  observer Z = send "Z" (return Z)
  observer (S x) = send "S" (return S << x)

Unfortunately, the class constraint solver doesn't like this. I get an
error message like

Constraint Stack Overflow:
  Observable (N (Mu N))
  Observable (Mu N)
  Observable (N (Mu N))
  Observable (Mu N)
  ...

I expected that the constraint solver would realize it could construct a
dictionary transformer with a type like Observer Nat -> Observer Nat and
take the fixed point to get an instance for Nat. I haven't looked at the
implementation, but it seems like it would be easy to add the constraint
we are trying to derive to the list of assumptions when trying to
construct all the antecedents of an instance declaration.

Can anyone tell me how to
1) get around this
2) modify GHC to handle this :)

Brandon





Re: Calling Haskell from Java

2003-08-14 Thread Brandon Michael Moore
Is it fine if the interface uses JNI? The jvm-bridge is an excellent tool
if you can use JNI, but I don't know of anything that compiles Haskell to
java bytecode. There was a post a few years ago about an experimental Java
backend for GHC, but I haven't heard anything since, and the -J switch
doesn't do anything in a recent GHC. The Mondrian people had a version of
GHC back when they were targeting Java, but they've moved to .NET. Does
anyone know of a project (or a CVS tag) for something that can compile
Haskell to java bytecode?

The jvm-bridge project includes tools for generating a Haskell interface
to a Java class, and another for generating a monad that wraps the JVM
initialization your program needs, using type classes to model the class
hierarchy and convert parameters. There is a function that will
dynamically define a class with Haskell methods.

I don't know how much support jvm-bridge provides if you want to define a
class in Haskell and package it so you can use it from a normal Java
program. You would need to declare a class in Java with native methods,
and compile the Haskell into a suitable library providing the native
implementation. I don't remember any tools for doing this.
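The Haskell side of such a library would be an ordinary foreign export; a
rough sketch, in which the module, names and signature are made up for
illustration, and the Java class declaring the native method and the small
C/JNI stub that converts the jstring, initialises the Haskell RTS with
hs_init, and calls in are not shown:

{-# LANGUAGE ForeignFunctionInterface #-}
module Greeting where

import Foreign.C.String (CString, newCString, peekCString)

greet :: String -> String
greet name = "Hello, " ++ name ++ "!"

-- Exported with the C calling convention so a JNI stub backing the Java
-- class's native method can call it.
foreign export ccall hs_greet :: CString -> IO CString

hs_greet :: CString -> IO CString
hs_greet cname = do
  name <- peekCString cname
  newCString (greet name)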

I might have simply forgotten or overlooked a nice interface, or you might
need to write the JNI code. If you are determined to go this way you could
at least use the JNI binding jvm-bridge provides. Rather than doing that, it's
probably simpler for your program to start in Haskell, even if all
it does is define a class and invoke your main class (passing a factory
object).

I assume your Haskell will need to call Java at some point, if only to
unpack a collection, so I'll pass along two things that confused me for a
while. One thing to remember is that methods are defined in the class
module for the first class that defined them. If you want to use a method
that a class inherited from an ancestor you need to create and import the
class module for that ancestor. The method will have a name like
method_ancestor_args, but it calls the correct overridden method on
whatever object you pass as the this (first) argument. The other thing (this
is more a Haskell issue) is that if you are writing a GUI program the main
(Haskell) thread has to survive until the program is supposed to end, and
it needs to be inside the "let java threads run" combinator. (sorry, I
forgot the name and I'm away from my home machine)

What are you trying to do? I'm thinking about porting a web testing
application from Python to Haskell for parser combinators and monads (I'm
using objects with a run method to control execution and thread through
some state), but I don't know of any alternative to the HttpUnit library
for testing webpages with JavaScript. I just need to call a bit of Java in
the middle of a Haskell program.

Tell us how your project works out.

Brandon

On 12 Aug 2003, Immanuel Litzroth wrote:

> Calling Haskell from java was supposed to be supported by a tool
> called lambada, but all I can seen to find of that on the net is a
> paper. Does anyone have any pointers to more information/implementation.
> I specifically want to call Java->Haskell and not the other way around.
> thanks in advance
> Immanuel
> ***
> It makes me uncomfortable to see
> An English spinster of the middle class
> Describe the amorous effects of `brass',
> Reveal so frankly and with such sobriety
> The economic basis of society.
> W.H. Auden
>
> --
> Immanuel Litzroth
> Software Development Engineer
> Enfocus Software
> Kleindokkaai 3-5
> B-9000 Gent
> Belgium
> Voice: +32 9 269 23 90
> Fax : +32 9 269 16 91
> Email: [EMAIL PROTECTED]
> web : www.enfocus.be
> ***
>


RE: Help with Exceptions on I/O

2003-08-14 Thread Brandon Michael Moore
You don't really need to change the buffering mode. stdout is line
buffered by default, so you just need to make sure a newline is printed
after your message. putStrLn adds a newline after the string it prints, or
you could use \n in the string literal. Try this:

 main = do
  --lots of code goes here,
  --with a catch handler if you want it
  putStrLn "Press ENTER to exit" -- with Ln
  getLine
  return ()

On Tue, 12 Aug 2003, Hal Daume wrote:

> you can write this a bit more simply as:
>
> main = do
>   (do do-the-major stuff here
>   putStr "File created...")
> `catch` (\_ -> show the error)
>   getLine-- look ma, no <-
>   return ()
>
> now, your problem is almost certainly with buffering.  in the main do,
> put
>
>   hSetBuffering stdout NoBuffering
>   hSetBuffering stdin  NoBuffering
>
> you'll need to import System.IO to get these.
>
>  --
>  Hal Daume III   | [EMAIL PROTECTED]
>  "Arrest this man, he talks in maths."   | www.isi.edu/~hdaume
>
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Alexandre Weffort
> Thenorio
> Sent: Tuesday, August 12, 2003 4:17 PM
> To: [EMAIL PROTECTED]
> Cc: [EMAIL PROTECTED]
> Subject: Help with Exceptions on I/O
>
>
> I have a program which creates text files out of other files. Since the
> program is run from Windows I output some text strings (like "File
> created successfully") and I need to stop the program before it quits, so
> that the user can read the line output to know what went on, and then he
> can press ENTER to quit the program.
>
> I managed to do this fine if no error occurs, but when an error occurs I
> am having problems.
>
> The code goes like this:
>
> main :: IO()
> main = catch (do
>          Do all the needed stuff here
>          putStr "File created Successfully. Press RETURN to quit"
>          dummy <- getLine -- Halts the program so the user can read the above line
>          putStr "Exiting now...") -- needed since I can't finish a do function with the "dummy <- getLine" line
>        (\_ -> do putStr "\nFile not found. Press RETURN (ENTER) to quit."
>                  dumb <- getLine
>                  putStr "\nExiting...")
>
> So when the program runs, if the input file is found the program reports
> successful creation of the file, but if the file doesn't exist, after the
> user gives the input filename and presses enter, the program prints a new
> line and halts (probably because of the getLine function) without writing
> anything. Then when the user presses ENTER again it writes the line at the
> first putStr (File not...), then writes the next putStr line under it
> (Exiting...) and exits. I don't know why it doesn't write the first line,
> halt, and then, when the user presses enter, write the second and quit.
>
> Can anybody help me, as I am not very familiar with exceptions and
> catches?
>
>
> Another question I have is: is there any other function, rather than
> getLine, that halts a program and continues when the user presses any key
> (instead of ENTER)? Besides, this is ugly code, since getLine wasn't made
> for that, but I couldn't find anything else myself.
>
> Thank you in advance.
>
> Best Regards
>
> Alex