Re: [Haskell-cafe] Dynamic thread management?

2007-08-22 Thread Brandon Michael Moore
On Wed, Aug 22, 2007 at 04:07:22AM +0100, Hugh Perkins wrote:
 On 8/21/07, Andrew Coppin [EMAIL PROTECTED] wrote:
  I highly doubt that automatic threading will happen any time this decade
  - but I just learned something worth while from reading this email. ;-)
 
 That's an interesting observation.  I can't say I don't believe it, but
 I'm interested to know why (but it could be just a feeling, or an
 observation in time-to-market lead times?).  Are you saying this
 because multicores aren't sufficiently widespread or powerful enough
 yet (4-cores doesn't really even make up for the overhead of using
 automatic threading, at least in initial implementations)? or are you
 saying this because you think the technical implementations are not
 sufficiently advanced?

Automatic threading is inherently limited by data dependencies.
You can't run a function that branches on an argument in parallel
with the computation producing that argument. Even with arbitrarily
many processors and no communication overhead there is a limit to
how much parallelism you can squeeze from any given program.

You should read
Feedback Directed Implicit Parallelism
http://research.microsoft.com/~tharris/papers/2007-fdip.pdf
and perhaps
Limits to Implicit Parallelism in Functional Applications
http://www.detreville.org/papers/Limits.pdf

In short, with zero overhead and an oracle for scheduling you can
get a factor of at most 2x to 32x by implicitly parallelizing
existing Haskell code. In practice, with execution overhead it's a
gain of perhaps 10% to 80% on a 4-core system. The experiments in
the first paper are based on a fairly sophisticated implementation
that reduces overhead by using profiling results at compile time
to decide which thunks might be worth evaluating in parallel. For a
fixed benchmark there's probably not much lost by using canned
profiling results instead of adapting at runtime, and in any case
the hard bounds from data dependencies still apply.

You can do a lot better if you expect people to rewrite code,
but automatic threading suggests something completely effortless.
I think you can get much better results if you work on the programming
style in connection with a better runtime.  You can think of data parallel
Haskell as a new programming style with more implicit parallelism,
and the runtime support to exploit it.
 
 I kind of think automatic threading is like 3d graphics: as soon as the
 hardware became sufficiently powerful, 3d graphics became trivial.
 Enough money was thrown at the problem in a very short time by a few
 powerful companies that it was a non-issue.

If you have cores to waste, you might try rewrites like

f x = case x of
        C1 a1 a2 -> f (C1 a1 a2)
        C2 b     -> f (C2 b)
        C3       -> f C3

and then speculatively execute several of the case branches.
If you don't throw away too much type information you should
even be able to do it at runtime.
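
For a much simpler cousin of that rewrite, you can already ask for
speculation by hand with the parallel library's `par`; this is only a
hedged illustration of the idea for an if, where neither arm needs
fields of the scrutinee:

import Control.Parallel (par)

-- spark both arms before the condition is demanded; whichever arm is
-- ultimately chosen may already have been evaluated by another core
speculativeIf :: Bool -> a -> a -> a
speculativeIf c t e = t `par` e `par` (if c then t else e)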

Brandon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Syntax for lambda case proposal could be \of

2007-08-15 Thread Brandon Michael Moore
On Wed, Aug 15, 2007 at 11:06:36AM -0700, Stefan O'Rear wrote:
 On Wed, Aug 15, 2007 at 06:58:40PM +0100, Duncan Coutts wrote:
  On Wed, 2007-08-15 at 10:50 -0700, Stefan O'Rear wrote:
  
   OTOH, your proposal provides (IMO) much more natural syntax for
   multi-pattern anonymous functions, especially if we stipulate that
   unlike a case (but like a lambda) you can have multiple arguments; then
   you could write stuff like:
   
    sumTo0 = foldr (\of 0 k -> 0
                        n k -> n + k) 0
   
   Anyone else like this?
  
  Why not just:
  
  sumTo0 = foldr (\0 k -> 0
                   n k -> n + k) 0
 
 Because it would break a very large amount of old code, and I think H'
 was supposed to be upward compatible:
 
 foo = getSomethingCPS $ \ arg ->
   moreStuff
 
 is now a syntax error (\ { varid -> } matches no productions).

I was going to say you could leave the other production, but layout
is separated from parsing. It's not even simple to say use nondeterminism,
because of that rule about a layout level ending if there would otherwise
be a parse error. If that parse-error rule backtracks far enough, old lambdas
could simply have the production  \{} pat -> expr; if it's only one token
of lookahead, perhaps \ { pat } -> expr would be mostly backwards-compatible
(but it would be very weird to understand the errors when it wasn't).

Lambdas with multiple arguments and pattern matching sound really nice, but
"case of" and "\of" are both extremely ugly names. They only make sense if
you are familiar with the rest of the language and think of this as
shoehorning in some new kind of lambda with pattern matching (remember
constructor classes, and how it's all just typeclasses now). What do the rest
of you on the cafe think?

If a different name is necessary, I'd prefer something like "fun" that just
tries to imply this is a slightly heavier, more powerful kind of anonymous
function. (Plus it's a step towards rec-expressions.)

Brandon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: towards a new foundation for set theory with atoms

2007-08-11 Thread Brandon Michael Moore
On Fri, Aug 10, 2007 at 03:54:23PM -0700, Greg Meredith wrote:
 Haskellians,
 
 A quick follow up. If you look at the code that i have written there is a
 great deal of repeated structure. Each of these different kinds of sets and
 atoms are isomorphic copies of each other. Because, however, of the
 alternation discipline, i could see no way to abstract the structure and
 simplify the code. Perhaps someone better versed in the Haskellian mysteries
 could enlighten me.

You could take a less absolute view of the game, and describe each node
instead locally from the perspective of a player. Imagine Alice Bob and
Carol sitting around a table:

data ThreePlayers a b c =
   Next (ThreePlayers b c a)
 | Prev (ThreePlayers c a b)

In general you can get subgroups of a symmetric group as your sets of
colors this way (i.e, the set of elements of any group), I'm not quite
sure how much freedom you have in the sets of allowed transitions
(in particular, making some of the argument types identical can break
symmetry).

You could also go for the obvious big hammer, pretend that Haskell has
a strongly normalizing subset and encode inequality explicitly with
GADTs and such.

data Eq a b where Refl :: Eq a a
data False
type Neq a b = Eq a b -> False
-- might be trouble if a and b are only equal non-constructively?

data Red = Red
data Green = Green


data Set color where
  Red :: Neq Red color -> Set Red -> Set color
  ...

Brandon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] a regressive view of support for imperative programming in Haskell

2007-08-09 Thread Brandon Michael Moore


On Thu, Aug 09, 2007 at 11:52:17AM -0700, David Roundy wrote:
 On Thu, Aug 09, 2007 at 02:08:20PM +0100, Jules Bean wrote:

*snip*

  A third example is with nested dos:
  
  do x <- bar y
     baz
     something $ do foo x
  
  is not the same as
  
  do baz
     something $ do foo (<- bar y)
 
 Again, it all comes down to whether the find the nearest do is obvious.
 It seems pretty obvious to me.  And I like the idea of someone just
 implementing this, and then those of us to whom it appeals can try it.
 I've longed for something like this (mostly for monadic ifs and cases) for
 quite a while now...

Funny, I've been longing for the monadic case (and if) for quite a while.
A monadic case is simple, it's handy, and you don't have to worry about
lots of interactions:

caseM e of alts  ==  e >>= \x -> case x of alts

I'm convinced this would be plenty useful on its own, and also that
trying to design any more comprehensive syntax quickly gets really
tricky.

The basic problem seems to be that functions can expect either monadic
or pure arguments, and return pure or monadic values, so there are at
least three possible conversions you might want at each application
(considering pure-pure and monadic-monadic the same). Defaulting
to make things work requires type information, and doesn't seem
nearly so simple if you consider that programmers might actually want
to pass around actions of the monad they are running in as values
(Setting GUI callbacks, using [] for String processing, etc).

Actually, deciding which transformation gets juxtaposition and how to
recurse into subterms seems to give a design space that might have
reasonable solutions. More on that in a later message.

  There is also the fact that if :
  foo x = bar x x
  
  then you call foo monadically as in
  
  do foo (<- baz)
  
  You can no longer replace foo with its definition, because if you replace
  that with
  
  do bar (<- baz) (<- baz)
  
  ...that means something rather different :(
 
 Again, this seems obvious, and it doesn't seem like replace foo with its
 definition is something I think of.

One of the great things about haskell is how completely naive
you can be when you replace foo with its definition, and still do
valid equational reasoning.

It would be sad if substituting a parenthesized
subterm of something that looked like an expression wasn't valid.
(expanding a definition can change sharing, but at least it's denotationally
equivalent).  The only slightly tricky things now are remembering
that x <- exp does not define x to be exp, and what to expand a class method
to. I think I'd be happier if there was some bracketing around the
expression to be transformed, to warn you to again be cautious and fearful
about transforming your code.

Brandon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: monad subexpressions

2007-08-03 Thread Brandon Michael Moore
much snipping

 Also, note, if you use the operators in Control.Applicative, then:
 
   return $ foo $(bar1) $(bar2) $(bar3) ...
 
 can be:
 
   return foo <*> bar1 <*> bar2 <*> bar3 ...
 
 or:
 
   foo <$> bar1 <*> bar2 <*> bar3
 
 I don't (personally) see how that's any more cryptic than placing brackets
 around the monadic values themselves.
 ...

Seconded. The main difference with brackets is that the application to pure
values looks the same as normal application.

 
 To get outside the scope of idiom brackets/applicative, you'd need a use case 
 like:
 
   if $(mexpr) then branch1 else branch2
 
 or (lest that be too easy):
 
   case $(mexpr) of
     p1 -> branch1
     p2 -> branch2
   ...
 
 In other words, something where you're not simply applying a pure function to 
 a bunch of monadic arguments. I can't say I've run into such patterns much 
 myself, but I've been told they're common in xmonad, and may be elsewhere.

General purpose brackets are overkill here. I would really like a simple
monadic case. What's so bad about

caseM mexpr of
  p1 -> branch1
  p2 -> branch2

==

(mexpr >>= \e -> case e of
  p1 -> branch1
  p2 -> branch2)

It's simple sugar for working with monadic code, much like do notation.
(indeed, it seems to plug a gap - we have do for sequencing, liftM and
so on for application, but no sugar for case discrimination)

It's a much simpler sort of thing than this fancy sugar for intermixing
code in various monads people have been talking about (so far it seems
assumed that one is just Identity...)

Brandon

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Re : [Haskell-cafe] Indentation woes

2007-07-27 Thread Brandon Michael Moore
On Thu, Jul 26, 2007 at 05:34:32PM -0400, anon wrote:
 2007/7/26, Stefan O'Rear [EMAIL PROTECTED]:
 As for why, it's just a matter of Haskell Committee taste.  Nothing
 too deep, just an arbitrary set of rules.
 That's not much of an explanation, is it? I imagine someone must have
 given the matter some thought before describing the layout rule in
 great details in the language report. Perhaps there was a perfectly
 good reason to preclude this kind of code, but I'm afraid I do need a
 reason if I am to understand why.

Part of the reason is that the layout rule is supposed to be somewhat
independent of the rest of the grammar. It's described as a simple
preprocessing stage that adds block delimiters { ; }, just recognizing
a few keywords that open blocks, and otherwise looking at the indentation
of the first non-whitespace character on lines.

You can allow the syntaxes where something is no less indented than its
containing block by allowing some optional semicolons in the grammar.
GHC keeps its parser in compiler/parser/Parser.y.pp. It's a Happy
grammar file, so it shouldn't be hard to make your change and see how
you like it. I think the gdrh nonterminal is the one you want to change:
add another production that allows ';' '|' quals '=' exp.

Have fun

Brandon Moore
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Minim interpreter

2007-07-20 Thread Brandon Michael Moore
On Fri, Jul 20, 2007 at 10:10:58PM +0200, Hugh Perkins wrote:
 Newbie question: why does the following give Not in scope 'c' for the last
 line?

I assume you meant
 
 string :: Parsec.Parser String
 string = do c <- Parsec.letter
             do cs <- string
                return c:cs
             Parsec.<|> return [c]

Without adding that indentation, the second do cuts off the first block
and you get a rather different error.

The problem here is that the line beginning Parsec.<|> is lined up
with the first token after do, so layout adds a semicolon in front
of it, but a statement can't begin with an operator, so to avoid that
parse error the layout rules add the close brace and end the do block.
It parses like this:

string = ( do { c <- Parsec.letter
              ; cs <- string
              ; return c:cs
              } )
           Parsec.<|> (return [c])

The parse error rule is there so a do block will be closed by the end of
surrounding parens or braces, maybe it has other uses.
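
For what it's worth, here is one way to indent the recursive version so
that layout keeps the alternative inside the outer do block; this is only
a sketch, assuming a qualified import of Text.ParserCombinators.Parsec,
and I haven't run it against your full program:

import qualified Text.ParserCombinators.Parsec as Parsec

string :: Parsec.Parser String
string = do c <- Parsec.letter
            (do cs <- string
                return (c:cs))
              Parsec.<|> return [c]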

In any case, you really ought to use many1.

 string = Parsec.many1 Parsec.letter

Brandon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Parsing and Coding

2007-07-07 Thread Brandon Michael Moore
On Sat, Jul 07, 2007 at 06:49:25PM +0100, Andrew Coppin wrote:
 Dave Bayer wrote:
 I was beginning to accept that I might die before clearing my pipeline 
 of research projects I want to code up.
 
 ...so it's *not* just me!
 
 Haskell has given me new hope.
 
 Indeed. ;-)
 
 
 Today I hve implemented encoders and decoders for RLE, MTF, Fibonacci 
 codes, and LZW. Next on my list is BWT, Huffman codes and arithmetic 
 coding. (That last is *very* hard though...)

You should look at Jeremy Gibbons' paper Arithmetic coding with folds and 
unfolds. www.comlab.ox.ac.uk/oucl/work/jeremy.gibbons/publications/arith.pdf

For your more general parsing troubles, have you considered making your lower
level parsers copy the rest of the underlying input stream into each token
they produce? After that transformation a simple string might look something
like [(c, rest) | (c:rest) <- init (tails "Some characters of input")].

Brandon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Class Interfaces in OOHaskell?

2007-07-06 Thread Brandon Michael Moore
On Fri, Jul 06, 2007 at 06:11:42PM -0400, Scott West wrote:
 I conquered the below problem, but now I have another question:
 
 How can one have two interface-classes that reference each other? For 
 example,
 
 type Inter1 = Record (
      MkFoo :=: Inter2 -> IO ()
  :*: HNil )
 
 type Inter2 = Record (
      MkBar :=: Inter1 -> IO ()
  :*: HNil )
 
 Obviously this is cyclical, but is there a nice way to get around it?
 I think I could wrap them up in a datatype (ie, data InterOne =
 InterOne Inter1, and modify definitions accordingly) but are there any
 alternative methods?

In Haskell type synonyms can't be recursive. You can use a newtype - the
wrapper has no runtime cost, it's just an instruction to the typechecker,
much like the explicit rolling and unrolling in an isorecursive treatment
of recursive types.
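
A generic (non-OOHaskell) illustration of the same point, with
hypothetical names:

-- type Loop = Int -> Loop          -- rejected: recursive type synonym
newtype Loop = Loop (Int -> Loop)   -- accepted: the constructor does the
                                    -- rolling and unrolling explicitly

apply :: Loop -> Int -> Loop
apply (Loop f) n = f n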

I think it's about as easy to build a typesystem with recursive types,
the problem is that a lot of bogus programs typecheck. See
http://www.haskell.org/pipermail/haskell-cafe/2006-December/020074.html

Matthias Blume et al. have made a language called MLPolyR with
extensible records and variants (with case expressions and exception
handling built from them), and report that it works pretty well to
infer infinite types only if they recurse through a record or variant
type.

Brandon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Best idiom for avoiding Defaulting warnings with ghc -Wall -Werror ??

2007-06-25 Thread Brandon Michael Moore
On Mon, Jun 25, 2007 at 08:53:18AM -0700, Dave Bayer wrote:
 It continues to appear to me that ghc -Wall -Werror doesn't support  
 small Int constants without a per-use penalty, measured in code length.

Why not use ghc -Wall -Werror -fno-warn-defaulting, maybe with
default(Int)? It removes the potential problems that justified
coding the warning, and turns off the warning.
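
The default declaration half of that suggestion is just standard Haskell;
a minimal sketch, assuming the ambiguity comes from numeric literals such
as exponents and that the warning itself is silenced with the flag above:

module Example where

default (Int)    -- ambiguous Num-constrained type variables pick Int first

area :: Double
area = 2.0 ^ 3   -- the exponent literal 3 now defaults to Int, not Integer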

By the way, using Integer for exponents really shouldn't be less efficient - 
but it seems it is.

The code for (^) should be something like this:

{-# INLINE (^) #-}
n ^ m = case toInteger m of
  S# i   -> powerInt# n i
  J# a p -> powerGmp n a p

(With powerInt# and powerGmp specialized on various types
of n, when there is something to gain).

Then the standard optimizations (inlining, static instance
selection, more inlining, and case of constructor)
should turn n^3 into the same code whether 3 is Int or Integer.

Perhaps GHC.Real needs to be sprinkled with more pragmas.

 Am I the only one blessed/cursed with a vision of how proponents of  
 practical languages would have a field day with this? Perhaps I'm  
 reading too many blogs.

Seeing as it only happens if you specifically ask the compiler
to be as annoying as possible, no reasonable person should take
this much farther than complaining about the GHC warning options.

After all, the type system and purity we claim are generally good
things are still around whatever options you pass, and none of the
justifications for them have much to say one way or the other on
this sort of compiler warning.

I think nobody will argue if you suggest GHC shouldn't complain
about defaulting if only one of the candidate types is actually
usable. It's rather like typeclasses with -fallow-overlapping-instances.

Brandon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Haskell serialisation, was: To yi or not to yi...

2007-06-21 Thread Brandon Michael Moore
On Thu, Jun 21, 2007 at 04:37:20PM +0200, Tom Schrijvers wrote:
 That wouldn't make a difference. If, from the pure Haskell point of view 
 we can't tell the difference between two expressions that denote the same 
 function, then operations in the IO monad should not be able to do so 
 either. Otherwise a whole lot of program transformations based on 
 rewriting of expressions would be invalid. How do you account for that?

In the same way we don't mind if const getChar can tell a difference
between the value () (and itself). Also think of imprecise exceptions -
really the pure code hitting the exception does all the usual stuff with
stack walking, but semantically the pure expression evaluates to a set
of exceptions, catch makes a nondeterministic choice of which one to handle,
and it just happens that catch always picks the one that you run into first
given the evaluation order that happened in this particular execution.

I think it would be safe to treat serialize the same way.

Brandon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] What puts False before True?

2007-06-06 Thread Brandon Michael Moore
On Wed, Jun 06, 2007 at 02:50:12AM +0100, PR Stanley wrote:
 
 PR Stanley wrote:
 What do the ??? symbols represent?
 
 I see you are still stuck in ISO-8859-1 and 
 deprived of international characters and 
 symbols. (And this reply in ISO-8859-1 too 
 accordingly; normally I use UTF-8.) Unicode and UTF-8 FTW! :)

 oh very good, very good but forgive me, how is 
 that supposed to answer my question?
 Paul 

My mail reader also seems to lack proper unicode support.
I could figure out what the symbols were (and verify the
message arrived uncorrupted in UTF-8) by saving the message
and using another program. Perhaps something similar would
work for you. 

The unicode-bearing paragraph of Scott Bricker's message reads

It's natural to define a partial order among sets from inclusion:
A less-than or equal to B if and only if A subset of or equal to B.
Viewing sets as predicates, the corresponding relationship between
predicates is implication. A subset of or equal to B if and only if
x element of A implies x element of B - so predicates are naturally
ordered by implication. Viewed as sets, the predicate that always
returns False is equivalent to empty set - the empty set, while
the predicate that always returns True is equivalent to U
- the universal set that contains everything (in naive set theory,
anyway - in axiomatic theories it gets a little complicated).

I replaced each unicode character with its name. I think the
result reads pretty well. There ought to be programs for this,
can anyone suggest Haskell libraries that would make one
easy to write?

Brandon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads and constraint satisfaction problems (CSP)

2007-05-31 Thread Brandon Michael Moore
On Thu, May 31, 2007 at 10:42:57AM -0700, Greg Meredith wrote:
 All,
 
 All this talk about Mathematica and a reference to monadic treatments of
 backtracking reminded me that a year ago i was involved in work on a
 Mathematica-like widget. At the time i noticed that a good deal of the
 structure underlying LP, SAT and other solvers was terribly reminiscent of
 comprehension-style monadic structure. i think i asked Erik Meijer if he
 knew of any work done on this and posted to LtU, but nobody seemed to have
 understood what i was mumbling about. So, let me try here: does anybody know
 of references for a monadic treatment of constraint satisfaction?

It's not particularly monadic, but you might check out 
Modular Lazy Search for Constraint Satisfaction Problems
http://cse.ogi.edu/PacSoft/publications/.../modular_lazy_search.pdf
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] What puts False before True?

2007-05-31 Thread Brandon Michael Moore
On Thu, May 31, 2007 at 10:03:05AM +0100, PR Stanley wrote:
 
  What is the basic philosophy for Bool being a member of Ord?

I hear two questions, why is Bool a member of Ord at all, and
why was it ordered with False before True.

If I'm reading the reports correctly, the Ord instance was
actually added in the Haskell 1.1 standard, by changing the
definition of Bool from 

data Bool = False | True

to

data Bool = False | True deriving (Eq, Ord, Ix, Enum, Text, Binary)

(Text was later broken into Read and Show)

I imagine it was added because you generally can't derive an
instance of a class for a new type unless all the types you
mention in the definition already have instances. Also, there
are Ord instances like (Ord a, Ord b) => Ord (a,b), and
it's sometimes handy to be able to use types like (String, Bool)
as keys in a Map, or keep them in a Set. Probably there are
other useful things you can do.
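
For example (my own illustration, not from the thread), a compound key
involving Bool needs that instance:

import qualified Data.Map as Map

flags :: Map.Map (String, Bool) Int
flags = Map.fromList [ (("verbose", False), 0)
                     , (("verbose", True ), 1) ]

lookupFlag :: String -> Bool -> Maybe Int
lookupFlag name b = Map.lookup (name, b) flags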

  What justifies False  True?
 in most interpretations this equals:
 
 False == 0
 True == 1

Indeed, it's the same in C but what justifies the decision in Haskell?

It seems like we actually get that order because deriving Ord makes
constructors earlier in the definition compare less than later
constructors, and nobody was bothered by that ordering. I can't
see how one of the orders could be more expressive or otherwise
technically better than the other, so I suppose you might as well
agree with other conventions. I think it's mathematical convention
more than the C convention Haskell is agreeing with.

But this is all just speculation - the members of the Haskell committee could
tell us why this all actually happened, and at least a few read this list.

Brandon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] where do I point the type annotations

2007-05-18 Thread Brandon Michael Moore
On Fri, May 18, 2007 at 02:39:48AM -0400, Alex Jacobson wrote:
 I am playing with using SYB to make generic indexed collections.  The 
 current code is this:
 
   data Syb = Syb [Dynamic] -- list of [Map val (Set a)]
 
   empty item = Syb $ gmapQ (toDyn . emp item) item
       where
         emp :: x -> y -> Map.Map y (Set.Set x)
         emp x y = Map.empty
 
   insert x (Syb indices) = Syb $ zipWith f indices (gmapQ toDyn x)
       where
         f dynIndex dynAttr = toDyn $ Map.insert attr
               (maybe (Set.singleton x) (Set.insert x) $
                  Map.lookup attr index) index
             where
               index = fromJust $ fromDynamic dynIndex
               attr  = fromJust $ fromDynamic dynAttr
 
   e  = empty i where i = i :: Test
   t1 = Test "foo" 2
   c1 = insert t1 e


The problem is that Dynamic just remembers that whatever type of value it holds,
that type is Typeable. In particular it doesn't try to keep track of whether
there is an Ord. If you are polymorphically processing the contents of
a Dynamic you can't assume much - Dynamic could have been defined

data Dynamic = forall t . (Typeable t) => Dynamic t

You can make something like your code using a more informative existential:

 data Ix a = forall key . (Typeable key, Ord key) => Ix (Map key (Set a))

(I'm adding the parameter a because it looks like your code would just
explode if you tried to add values of several different types to a Syb).

 data Syb a = Syb [Ix a]

 insertIndex k v index =
  Map.insertWith Set.union k (Set.singleton v) index

insert indexes the argument by the subterms the index actually cares about. 

 insert :: (Data a, Ord a) => a -> Syb a -> Syb a
 insert x (Syb indices) = Syb $ update indices (gmapQ toDyn x)
   where
     update [] _ = []
     update (Ix index:is) dyns =
       let (d:dyns') = dropWhile (\d -> dynTypeRep d /= keyType) dyns
           key       = fromJust $ fromDynamic d
           keyType   = typeOf ((undefined :: Map key (Set a) -> key) index)
       in (Ix (insertIndex key x index) : update is dyns')

 data Test = Test String Int
   deriving (Data,Typeable,Eq,Ord)

Unfortunately, you can't automatically build an empty index.
gmapQ toDyn is great for getting subterms, but not checking
if they are Ord.

 e = Syb [Ix (Map.empty :: Map String (Set Test)),
   Ix (Map.empty :: Map Int (Set Test))]

At least the test works.

 t1 = Test "foo" 2
 c1 = insert t1 e

If it's enough to support types where every subterm is Ord, you could probably
automate building the empty index with the strategy from
Scrap Your Boilerplate with Class

Brandon

P.S.

The existential definition of Dynamic suggests there could be

withDynamic :: (forall t . (Typeable t) => t -> a) -> Dynamic -> a

it takes an awful lot of black magic to define it for GHC, though.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] QuickCheck invariants for AST transformations

2007-05-08 Thread Brandon Michael Moore
On Tue, May 08, 2007 at 10:06:32AM +0100, Joel Reymont wrote:
 I'm looking for suggestions on how to create invariants for the  
 following AST transformation code. Any suggestions are appreciated!
 
 I asked this question before and Lennart suggested abstract  
 interpretation as a solution. This would require interpreters for  
 both ASTs to determine that the result they achieve is the same. I  
 don't fancy writing a C# interpreter, though, so I'm looking for an  
 easier way out.
 
   Thanks, Joel

You can't claim a translation preserves meaning, if you don't say anything
about what C# means. But, you can use a description somebody else already
wrote, like the microsoft implementation, or maybe there are formalizations
floating around for some theorem provers.

For partial specification, checking that a transformation preserves types
is good.  If you're translating between languages you can at least define
another translation between types, and check that the type of the
translation of an expression is the translation of the expression's type.
You still need a model of the C# type system, but you shouldn't need to
trust the model if you generate code with type annotations, and any
mismatches will be caught.

You might avoid specifying the meaning of C# directly by instead assuming
that certain pairs of expression and translation have the same meaning,
whatever that is, and then use some other rules about when two expressions
in the same language are equivalent to stretch your primitive assumptions
about translations to the correctness of your whole translation. How you
show those rules are correct, I don't know.
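
For concreteness, here is the shape such a QuickCheck property takes on a
deliberately tiny toy pair of languages; every name in it is invented for
illustration, and the two evaluators stand in for whatever model of C# you
end up trusting:

import Test.QuickCheck

-- toy source and target ASTs (purely illustrative)
data SrcExpr = SLit Int | SNeg SrcExpr deriving Show
data TgtExpr = TLit Int | TSub TgtExpr TgtExpr deriving Show

-- the transformation under test: negation becomes subtraction from zero
translate :: SrcExpr -> TgtExpr
translate (SLit n) = TLit n
translate (SNeg e) = TSub (TLit 0) (translate e)

-- reference interpreters for both ASTs
evalSrc :: SrcExpr -> Int
evalSrc (SLit n) = n
evalSrc (SNeg e) = negate (evalSrc e)

evalTgt :: TgtExpr -> Int
evalTgt (TLit n)   = n
evalTgt (TSub a b) = evalTgt a - evalTgt b

instance Arbitrary SrcExpr where
  arbitrary = sized gen
    where gen 0 = SLit <$> arbitrary
          gen n = oneof [SLit <$> arbitrary, SNeg <$> gen (n `div` 2)]

-- the invariant: the two interpreters agree across the translation
prop_meaningPreserved :: SrcExpr -> Bool
prop_meaningPreserved e = evalSrc e == evalTgt (translate e)

main :: IO ()
main = quickCheck prop_meaningPreserved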

Brandon.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Indenting with PPrint

2007-05-08 Thread Brandon Michael Moore
On Tue, May 08, 2007 at 07:39:15AM +0100, Joel Reymont wrote:
 
 If you happen to be formatting C I've also worked out how to get  
 nice argument
 lists out of both pretty printers.
 
 I'm formatting C# but I'll certainly take your arg list tips.

These narrow like

foo (a, b)
foo (a,
 b)
foo
  (a, b)
foo
  (a,
   b)

as space gets tight.

PPrint:

name <> nest 2 softline <> parens (align (sep (punctuate comma args)))

Text.PrettyPrint.HughesPJ:

sep [name, lparen] <> sep (punctuate comma args) <> rparen

Brandon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Indenting with PPrint

2007-05-07 Thread Brandon Michael Moore
On Mon, May 07, 2007 at 10:38:19PM +0100, Joel Reymont wrote:
 Folks,
 
 Are you using UU.PPrint [1]? Can you tell me how to print stuff like  
 this?
 
 {
  blah
  blah
 }
 
 I tried the following which sort of works but doesn't return the  
 closing brace to the indentation level of the first one.
 
 braces x = PP.braces $ linebreak <> indent 4 x

You could use

braces x = PP.braces $ indent 4 (linebreak <> x) <> linebreak


The tricky thing with PPrint is remembering that the current indentation
level is only applied when you hit a line, so nest doesn't affect a leading
word:
nest n (text a <> line <> doc) = text a <> nest n (line <> doc)

If you want to give the pretty printer some freedom in laying out your
document, and you care about how the alternate layouts look (and why
else would you be using a pretty printer), you might prefer

braces x = group (PP.braces (nest 4 (linebreak <> x) <> linebreak))

That will make a one-line form like

{blah, blah}

when everything fits, rather than always breaking as

{
    blah, blah
}

and it lets the printer flatten a brace group whenever it fits on one line,
not just if it happens to be inside an enclosing group.

The trick for getting GNU braces with Text.PrettyPrint is
braces x = cat [lbrace, text "", nest 4 x, rbrace]

without (text "") the vertical composition would overlap the indented blah with
the opening brace.

If you happen to be formatting C I've also worked out how to get nice argument
lists out of both pretty printers.

Brandon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Why do we have stack overflows?

2007-05-03 Thread Brandon Michael Moore
On Thu, May 03, 2007 at 04:59:58PM -0700, John Meacham wrote:
 I believe it is because a stack cannot be garbage collected, and must be
 traversed as roots for every garbage collection. I don't think there are
 any issues with a huge stack per se, but it does not play nice with
 garbage collection so may hurt your performance and memory usage in
 unforeseen ways.

Isn't it just the top of the stack that has to be treated as a root?
(maybe you need to walk the stack to find exception handlers and so on.)
Maybe it shouldn't be so much worse than a heap. The Chicken Scheme
system allocates everything on the C stack, and runs some sort of
compacting collector when it is about to fill.

Brandon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Release plans

2007-04-18 Thread Brandon Michael Moore
Sending to the right list this time, with some additions.

 Just to show what kind of problems we are currently facing. The  
 following type checks in our EHC compiler and in Hugs, but not in the  
 GHC:
 
 module Test where
 
 data T s = forall x. T (s -> (x -> s) -> (x, s, Int))
 
 run :: (forall s . T s) -> Int
 run ts = case ts of
            T g -> let (x, _, b) = g x id
                   in b

Consider this additional code which also typechecks in Hugs:

v :: forall s . T s
v = T f

f :: s -> ([s] -> s) -> ([s], s, Int)
f v g = let x = [v] in (x, g x, 0)

run v gives 0. Apparently id has type [s] -> s. I'm not sure
if we're ending up with the infinite type s = [s], or maybe
something really bad would happen in run v, if it were not
for parametricity keeping us from examining the first two
elements of the triple too closely (I tried adding Show
constraints, but couldn't get code like that to typecheck).

Now for wild speculation:

Simplifing T and assuming full existentials, you might define
something akin to run with a type like
(forall s . exists x. (x -> s) -> (x, s))
 -> (exists t . (t -> t) -> (t, t)),
which suggest to me two ingredients (in a language with type operators),
a functional axiom of choice plus fixpoints of type operators,
ac :: (forall a . exists b. X) -> exists f::*->*. forall a . X[f(a)/b]
and fixT :: (* -> *) -> *

Then you might implement the type above as

equalize pkg =
  open ac pkg as {exists F. body}
  in close {fixT F, body}

Brandon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


[Haskell-cafe] Re: Release plans

2007-04-18 Thread Brandon Michael Moore
On Tue, Apr 17, 2007 at 12:50:48PM +0200, Doaitse Swierstra wrote:
 Just to show what kind of problems we are currently facing. The  
 following type checks in our EHC compiler and in Hugs, but not in the  
 GHC:
 
 module Test where
 
  data T s = forall x. T (s -> (x -> s) -> (x, s, Int))
  
  run :: (forall s . T s) -> Int
  run ts = case ts of
             T g -> let (x, _, b) = g x id
                    in b

Consider this additional code which also typechecks in Hugs:

v :: forall s . T s
v = T f

f :: s -> ([s] -> s) -> ([s], s, Int)
f v g = let x = [v] in (x, g x, 0)

due to parametricity, run v can't depend on x or g x.
Apparently id has type [x] -> x. Are EHC and Hugs supposed
to support equirecursive types?

Brandon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Release plans

2007-04-18 Thread Brandon Michael Moore
On Tue, Apr 17, 2007 at 11:39:03PM -0700, Brandon Michael Moore wrote:
 On Tue, Apr 17, 2007 at 12:50:48PM +0200, Doaitse Swierstra wrote:
  Just to show what kind of problems we are currently facing. The  
  following type checks in our EHC compiler and in Hugs, but not in the  
  GHC:
  
  module Test where
  
   data T s = forall x. T (s -> (x -> s) -> (x, s, Int))
   
   run :: (forall s . T s) -> Int
   run ts = case ts of
              T g -> let (x, _, b) = g x id
                     in b
 
 Consider this additional code which also typechecks in Hugs:
 
 v :: forall s . T s
 v = T f
 
 f :: s -> ([s] -> s) -> ([s], s, Int)
 f v g = let x = [v] in (x, g x, 0)
 
 due to parametricity, run v can't depend on x or g x.
 Apparently id has type [x] -> x. Are EHC and Hugs supposed
 to support equirecursive types?
 
 Brandon

Oops, this should have been a reply to the original thread in 
glasgow-haskell-users.

Brandon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] First order Haskell without Data

2007-04-18 Thread Brandon Michael Moore
On Thu, Apr 19, 2007 at 02:47:30AM +0100, Neil Mitchell wrote:
 Hi,
 
 I've been wondering what is essential to Haskell and what isn't. Not
 from a point of view of what could be removed from the language, but
 what could be removed from a Core language.
 
 Given the features of higher-order, laziness and data types:
 
 Laziness can be converted to higher-order functions

Is this a pure language? If so you have to lose asymptotic performance
in some cases. In "More haste, less speed: lazy versus eager evaluation" by
Richard Bird, Geraint Jones and Oege De Moor there's an example of
a function that can be implemented in linear time in a lazy language
but requires O(n log n) time in a strict pure language.

It doesn't matter if you just want to reason about results, and on the
other hand for an intermediate language I suppose you might prefer
to add state and explicitly manipulate the thunking.

Brandon 
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why Perl is more learnable than Haskell

2007-04-11 Thread Brandon Michael Moore
On Wed, Apr 11, 2007 at 02:21:41PM +0100, Will Newton wrote:
 On 4/11/07, kynn [EMAIL PROTECTED] wrote:
 
 Perl is a large, ugly, messy language filled with quirks and 
 eccentricities,
 while Haskell is an extremely elegant language whose design is guided by a
 few overriding ideas.  (Or so I'm told.)
 
 Based on this one would think that it would be much easier to learn Haskell
 than to learn Perl, but my experience is exactly the opposite.
 
 I've been trying to learn Haskell for some time also, and I've learnt
 lots of various other languages in the past. I think one of the
 biggest problems is if there is a considerable learning curve, which
 Haskell undoubtedly has, there's a nagging question in the back of
 your head while you try and get a simple task accomplished in an
 unfamiliar language - why am I bothering with this, I could do it in
 5 minutes in Perl/Python/Ruby/...!.
 
 And for many simple tasks Perl is a really good fit - it's best to
 find a task that plays to Haskell's strengths so you get a bit of
 positive reinforcement while you work. I have been working with Parsec
 to do some parsing recently and I can definitely recommend it. I don't
 think I've used such a capable and easy to use parsing framework in
 any language and it's really kept me going with Haskell where I might
 have just done it in Python in the past.

Writing interpreters is one task where Haskell is really nice.
I suggest Unlambda, it makes a nice toy language. The syntax
is easy to work with, and continuations make the semantics
interesting enough that you can't just rely on the host language
acting the same way, like you generally can with mutable state,
sequential evaluation and so on (unless you're using something
like scheme or ml, but then you probably wouldn't have trouble
with Haskell).

Brandon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Profiling makes memory leak go away? Is Haskell a practical language?

2007-04-10 Thread Brandon Michael Moore
On Tue, Apr 10, 2007 at 11:03:32AM -0700, Oren Ben-Kiki wrote:
 On Tue, 2007-04-10 at 12:14 +0200, apfelmus wrote:
 Oren Ben-Kiki wrote:
  The code is in http://hpaste.org/1314#a1 if anyone at all is willing
  to take pity on me and explain what I'm doing wrong.
 
 There is an important point to note about streaming, namely that it
 conflicts with knowing whether the input is syntactically correct or
 not.
 
 True, this is  the core issue. Think of a turing machine processing an
 infinite tape. It is writing output and producing a final result; it is
 possible to examine the output tape before knowing the final result.
 Haskell parsers insist on having the output tape live in memory until
 the final result is known, and then give it to you as one huge object
 when done (or discard it and just give you the error message). NOT a
 good idea if your input tape is a 200 MB file!

It's nothing to do with Haskell or memory management, you just can't decide
whether the whole input is well-formed until you're done parsing, just like
you can't in general decide if a Turing machine is going to terminate until
it has.

You have to accept not knowing whether the input is well-formed until you
get to the end. There are two ways to do this that make it easy to get
streaming right. One is to have a data structure that explicitly contains
the possiblity of errors, like

data ErrList err a = Another a (ErrList err a) | Done | Failed err

Another is to return an ordinary structure containing values that
will raise an error when examined, and wrap a catch around the code
processing the streaming results. You might return for example a result

[1,2,3,error "parse error at 10:3 blah blah blah"]

You chose the most difficult way, returning immediately a structure
that has a field that when examined blocks until the input is done
and tells you if everything is valid.

That's tricky because it's very easy to make that field be some
unevaluated code that hangs onto the complete list of tokens and so
on, something like (a thunk of)
first_line_valid && second_line_valid && ...

GHC doesn't just go out and evaluate thunks once their dependencies
arrive, because sometimes that's a bad idea, most obviously if it's something
like an unevaluated infinite list, say [1..], which has no free parameters.

It's the same problem you see in

--argument to break sharing
input () = 'a' : input ()

main = let text = input() in putStr (text ++ [last text])

As the infinite list is unfolded the thunk for last text is still
hanging onto the beginning, so it can't be garbage collected.

It happens that you can incrementally compute length as the list is
unfolded, but it's somewhat beyond the compiler to figure that out
for itself. But, you can fix it by writing a function that does
both operations together:

list_followed_by_length l = rec l 0 where
  rec (x:xs) len = len `seq` (x : rec xs (len + 1))
  rec []     len = show len

Another option, if you're determined to be fancy, is to use the one
case where GHC actually does decide to evaluate something a little
bit during garbage collection. It's called a selector thunk -
if a piece of unevaluated code is *exactly* (after optimization)
case x of (_, ..., projected, ..., _) -> projected, or an equivalent
pattern match on another data type with just a single constructor,
it will be replaced by a direct reference to the selected field
once x is evaluated, as garbage collection proceeds.

If you want to go this way, add the -ddump-simpl flag to GHC and
inspect the output, and see what adding -O or -O2 does to it.

Brandon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Weaving fun

2007-04-10 Thread Brandon Michael Moore
On Wed, Apr 11, 2007 at 12:13:10AM +0200, Bas van Dijk wrote:
 Hello,
 
 For my own exercise I'm writing a function 'weave' that weaves a
 list of lists together. For example:
 
  weave [[1,1,1], [2,2,2], [3,3]] == [1,2,3,1,2,3,1,2]
  weave [[1,1,1], [2,2], [3,3,3]] == [1,2,3,1,2,3,1]
 
 Note that 'weave' stops when a list is empty. Right now I have:

If it wasn't for that, you could use

import Data.List(transpose)
weave :: [[a]] -> [a]
weave = concat . transpose

e.g.
  weave [[1,1,1], [2,2], [3,3,3]] == [1,2,3,1,2,3,1,3]
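
If you do want the stop-as-soon-as-one-list-runs-dry behaviour from your
spec, a hand-rolled sketch (mine, checked only on paper against your two
examples) is:

weave :: [[a]] -> [a]
weave [] = []
weave xss = go xss []
  where
    go [] acc            = weave (reverse acc)   -- finished one round
    go ([]:_) _          = []                    -- a list ran out: stop
    go ((x:xs):rest) acc = x : go rest (xs : acc)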

Brandon 
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Beginner's Question] How to read filenames from a DirStream

2007-04-09 Thread Brandon Michael Moore
It looks like all you can do with DirStream is get the filename, not
look at any other fields of the dirent - actually, it seems name
is the only standard field.  You might as well use
getDirectoryContents, unless you have a directory so huge that
a list of all the filenames takes too much memory!

Brandon Moore
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Lazy IO and closing of file handles

2007-03-15 Thread Brandon Michael Moore
On Wed, Mar 14, 2007 at 06:05:31PM -0700, David Brown wrote:
 Greg Fitzgerald wrote:
 
  What we need is a library for a readonly filesystem.  That is, all
  the same functions but pure.  I believe you could make this readonly
  library by wrapping each readonly I/O function with
  'unsafeInterleaveIO' or 'unsafePerformIO', but I don't really
  understand the consequences of using 'unsafe' calls, so I avoided it
  myself.
 
 SlurpDirectory from darcs tries to do this.  It is likely just
 specific to darcs needs, but perhaps it would be a useful thing.
 
 I've thought about making something like this for the purposes of
 harchive, but concluded that I need too much control over when things
 happen.  Having a memory leak that tries to read an entire filesystem
 into memory isn't going to work well :-)
 
 Dave

I haven't looked at the darcs function, but for this problem something like

--tested
readFiles :: [FilePath] -> IO [String]
readFiles files = return (map (unsafePerformIO . readFile) files)

or

--untested
readFiles2 files = mapM (unsafeInterleaveIO . readFile) files

probably works, as long as you read each file completely before going to the
next. If the thunk returned by readFile was so lazy as to not even open the
file before being forced you might not need this sort of thing at all
(but it couldn't report problems at the readFile site). If you don't process
each file completely you could make accessing any list element force earlier
files with something like this

serialize [] = []
serialize (a:as) = a : serialize' a as
  where serialize' a (b:bs) = a `seq` b : serialize' b bs
        serialize' _ [] = []

Brandon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: ANN: HSH 1.2.0

2007-03-13 Thread Brandon Michael Moore
On Mon, Mar 12, 2007 at 05:14:57PM -0500, John Goerzen wrote:
 On 2007-03-06, Simon Marlow [EMAIL PROTECTED] wrote:
  John Goerzen wrote:
  possible to create a pipe going directly from program A to program B.
 
  You certainly can pipe directly from one process to another:
 
 That only works for 2 processes.  What if I have 4 processes, and want
 to pipe from one to the next, all along, as in
 
  ls -l /tmp | tr a-z A-Z | sort | md5sum
 
 to give a contrived example

You can do this with runProcess, if you use
System.Posix.IO.{createPipe,fdToHandle} to make a pipe and
wrap the ends as handles. I hope hCreatePipe could be
implemented on windows.

import System.IO
import System.Process
import System.Posix.IO

run program arguments stdin stdout =
    runProcess program arguments Nothing Nothing stdin stdout Nothing

hCreatePipe = do (readFd, writeFd) <- createPipe
                 read  <- fdToHandle readFd
                 write <- fdToHandle writeFd
                 return (read, write)

main = do (read1, write1) <- hCreatePipe
          run "ls" ["-l","/tmp"] Nothing (Just write1)
          (read2, write2) <- hCreatePipe
          run "tr" ["a-z","A-Z"] (Just read1) (Just write2)
          (read3, write3) <- hCreatePipe
          run "sort" [] (Just read2) (Just write3)
          (read4, write4) <- hCreatePipe
          run "md5sum" [] (Just read3) (Just write4)
          hash <- hGetContents read4
          putStr hash


Brandon Moore
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: RFC: termination detection for STM

2007-02-14 Thread Brandon Michael Moore
On Wed, Feb 14, 2007 at 10:04:32AM +, Simon Marlow wrote:
 Perhaps I'm missing something, but doesn't GHC already detect the kind of 
 deadlock you're talking about here?  When a thread is blocked and cannot be 
 woken up, it is sent the BlockedOnDeadMVar exception.  It's more precise 
 than the extension you propose, because the GC is used to check which 
 threads are unreachable and therefore cannot be woken up, so it can detect 
 mutual-deadlock between two threads in a system that contains other running 
 threads.

Perhaps the idea is specifically to detect from the outside when a group of
threads is deadlocked, maybe like something that can be done with computation
spaces in Oz, definitely like the way tree spaces work in Aardappel
( http://wouter.fov120.com/aardappel/ ).

Based on your description, it sounds like it wouldn't work very well to have
a parent thread waiting on a channel, with one of the child threads set up
to catch BlockedOnDeadMVar and send a message, lest the parent thread be
considered deadlocked and sent BlockedOnDeadMVar itself.

What are the semantics of the exception? It seems like it might be tricky
to provide any guarantees, if a thread can catch the exception and make the
MVar live again.

Brandon 
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] How to implement `amb' operator?

2004-04-07 Thread Brandon Michael Moore
Keith is talking about a committed choice style of nondeterminism, where
one of the arguments is picked and the computation continues from there.
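
For completeness, a rough, untested sketch of that committed-choice
behaviour in IO, racing two threads the way Keith's reply below suggests
(it only forces each argument to weak head normal form):

import Control.Concurrent (forkIO, killThread)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Exception (evaluate)

-- race the two arguments; commit to whichever is evaluated first
amb :: a -> a -> IO a
amb e1 e2 = do
  result <- newEmptyMVar
  t1 <- forkIO (evaluate e1 >>= putMVar result)
  t2 <- forkIO (evaluate e2 >>= putMVar result)
  r  <- takeMVar result
  mapM_ killThread [t1, t2]   -- the loser is discarded
  return r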

If you want a computation with backtracking, or a list of all possible
results, then you should use the list monad, or another monad that supports
nondeterminism.

The tutorial All About Monads has a nice discussion of the list monad in
section two, another example (StateT with List) in section three, and
it's a good introduction to monads overall, if you need one.

If you wanted to pick x from 1 2 3 and y from 3 4 5 so that x squared
equals y you could write

import Control.Monad (guard)

results :: [(Int, Int)]
results = do x <- [1,2,3]
             y <- [3,4,5]
             guard $ x^2 == y
             return (x, y)

then results will be [(2,4)].

Brandon

On Wed, 7 Apr 2004, Keith Wansbrough wrote:

  Hi,
I think it's good idea to compute non-deterministic problems with the `amb'
  operator, just as in LISP/scheme. But how to implement it in haskell?

 Do you mean evaluate e1 and e2, and return the result of whichever
 returns first?

 Probably best to do this using threads.

 --KW 8-)
 --
 Keith Wansbrough [EMAIL PROTECTED]
 http://www.cl.cam.ac.uk/users/kw217/
 University of Cambridge Computer Laboratory.

 ___
 Haskell-Cafe mailing list
 [EMAIL PROTECTED]
 http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: [Haskell] deriving with newtypes

2004-04-06 Thread Brandon Michael Moore


On Fri, 2 Apr 2004, Simon Peyton-Jones wrote:

 Your word is my command. 'Tis done.

 Simon

 | -Original Message-
 | From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
 On Behalf Of Wolfgang
 | Jeltsch
 | Sent: 21 March 2004 17:55
 | To: The Haskell Mailing List
 | Subject: [Haskell] deriving with newtypes
 |
 | Hello,
 |
 | I'm trying to use GHC's deriving mechanism for newtypes in the
 following way:
 | class C a b
 | instance C [a] Char
 | newtype T = T Char deriving C [a]
 | Unfortunately, this isn't possible. Is there a reason for this? Can I
 | circumvent this restriction?
 |
 | Wolfgang

It looks like this lets you use partially applied type classes in a
deriving clause, always applying that class to the new type last. This
looks nice, but wouldn't work so well if your newtype was supposed to go
first:

class C a b
instance C Char [a]
newtype T = T Char deriving C ??

Maybe a deriving clause should allow full instance heads as well
as class names, so you could write deriving (C T [a]). (Maybe with
some restrictions, like ensuring the new type appears, or is one of the
class arguments).

It seems more regular to allow you to derive an instance of a
multi-parameter typeclass with your new type in any position rather than
just the last.

Brandon

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] What is the best way to write adapters?

2004-03-11 Thread Brandon Michael Moore


On Thu, 11 Mar 2004 [EMAIL PROTECTED] wrote:

 Thanks! Oleg.

 This works and it looks nice!

 And now, my code can be like:

 class FwdSig d where
   fwd :: (forall a. Sig a => a -> w) -> d -> w

 All the types that supports such forwarding are instances of FwdSig.

 My Def type is:

 instance FwdSig Def where
   fwd f (ClassDef c) = f c
   fwd f (ProtDef p) = f p

 instance Sig Def where
   getName = fwd getName
   getMethods = fwd getMethods
   ...

 My Native type is:

 instance FwdSig Native where
   fwd f (NativeSignature s) = f s
   fwd f (NativeProtocol p) = f p

 instance Sig Native where
   getName = fwd getName
   getMethods = fwd getMethods
   ...

 Many annoying forwarding functions are gone.

 The only thing that I hope to be better is this getXXX = fwd getXXX piece
 of code. Is it possible to reuse the same piece of code for both Native and
 Def and any other possible types?

I'm not as handy with the type system as Oleg, but I can help out here.

The problem with your new instance is that if the compiler is trying to
see if Native is an instance of Sig, it can start with the declaration Sig
Native, or the declaration FwdSig a => Sig a, both of which could
potentially derive an instance of Sig Native.

Additionally passing the -fallow-overlapping-instances flag will permit
you to compile a program where instances overlap like this, and will
select the most specific matching instance (looking only at the head).
Your code is fine.

Brandon

 Inspired by your generic code, I wrote such thing:

 instance FwdSig d => Sig d where
   getName = fwd getName
   getMethods = fwd getMethods
   ...

 However, my ghc complains about the use of Sig d.

 I followed its recommendation and put -fallow-undecidable-instances flag
 with the surprise that the FwdSig d => Sig d instance declaration conflicts
 with my other instance Sig XXX declarations.

 Surely this is not a serious problem, I can live with repeating the
 getXXX=fwd getXXX several times. Just curious about how further this can
 go.

 Ben.


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


[Haskell] Re: [Haskell-cafe] matching constructors

2004-03-08 Thread Brandon Michael Moore
I think the generics approach really is overkill here, but it's nice to
know the generics library.

For option processing Tomasz Zielonka described a nice technique
in the post I referred to. You can find the post in the archives at
http://www.haskell.org//pipermail/haskell/2004-January/013412.html

The big example at the end of his post seems to have exactly the option
structure you want, with input, output, a verbose flag, and a (composable)
selection of filters to use.

The basic idea is to make a record containing the options in their most
useful form and make each options descriptor (I assume you are using
(System.Console.)GetOpt here) return a function that transforms an option
record to reflect that option. Now to handle the list of values you get
back you just apply each transformer in turn to the default options.

It somewhat resembles building up option values in a collection of mutable
variables, although of course values are rather more flexible in Haskell
than most other languages, and the state is encapsulated and well
behaved.
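
To make that concrete, here is a small self-contained sketch of the
record-plus-transformers idiom (option names and record fields are invented
here, not taken from Tomasz's post):

import System.Console.GetOpt
import System.Environment (getArgs)

data Options = Options
  { optVerbose :: Bool
  , optInput   :: FilePath
  } deriving Show

defaultOptions :: Options
defaultOptions = Options { optVerbose = False, optInput = "-" }

-- each descriptor yields a function that updates the record
options :: [OptDescr (Options -> Options)]
options =
  [ Option ['v'] ["verbose"] (NoArg (\o -> o { optVerbose = True }))
      "chatty output"
  , Option ['i'] ["input"]   (ReqArg (\f o -> o { optInput = f }) "FILE")
      "input file"
  ]

main :: IO ()
main = do
  args <- getArgs
  case getOpt Permute options args of
    (transforms, _, []) ->
      print (foldr ($) defaultOptions transforms)  -- apply each transformer
    (_, _, errs) ->
      ioError (userError (concat errs ++ usageInfo "usage:" options))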

Brandon

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell-cafe] matching constructors

2004-03-05 Thread Brandon Michael Moore


On Fri, 5 Mar 2004, Vadim Zaliva wrote:

 Hi!

 I am new to Haskell, but I have background in various programming
 languages (including Lisp)

 I have very basic question, if there is a way to match algebraic types
 constructors besides
 use of pattern matching. I wrote today code like this:

 data Flag = Verbose |
  Input String  |
  Output String |
  Filter String
  deriving Show

 instance Eq Flag where
  Verbose  == Verbose  = True
  Input  _ == Input  _ = True
  Output _ == Output _ = True
  Filter _ == Filter _ = True
  _ == _ = False

 findFlag :: Flag - [Flag] - Flag
 findFlag f [] = f
 findFlag y (x:xs) = if x == y then x else findFlag y xs

 If there is a cleaner way to implement findFlag and Eq Flag?

 Vadim

It looks like you are doing some sort of option processing. There have
been some suggestions recently for better approaches on the haskell list.
See the thread High-level technique for program options handling from
January.

The basic idea there is to define a record type holding your options in
the most useful form for your program, and turn your arguments into
functions that transform such a record, rather than values you need to
analyze. The example in Tomasz Zielonka's post has exactly the option
structure of your code.



At the lower level of reimplementing your functions I can suggest a few
things.

It's probably overkill, but the Data typeclass provides an operation that
takes a value and returns the constructor used. Add Data to the list of
typeclasses you want to derive, and you can write your equality function
as (GHC only)

 x == y = toConstr x == toConstr y

For your findFlag function, the library function find is nearly what
you want, except it returns a value in Maybe rather than taking a default:

findFlag :: Flag -> [Flag] -> Maybe Flag
findFlag f fs = find (==f) fs

You could use the maybe function (b -> (a -> b) -> Maybe a -> b) to supply
the default.
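
Putting the two suggestions together, something like this should do it (a
sketch; it assumes the Data.Generics module that comes with GHC's
-fglasgow-exts, so treat the exact imports as approximate):

import Data.Generics (Data, Typeable, toConstr)
import Data.List (find)
import Data.Maybe (fromMaybe)

data Flag = Verbose
          | Input String
          | Output String
          | Filter String
          deriving (Show, Typeable, Data)

-- two flags are "equal" when they were built with the same constructor
instance Eq Flag where
  x == y = toConstr x == toConstr y

-- find the matching flag, or fall back on the supplied default
findFlag :: Flag -> [Flag] -> Flag
findFlag f fs = fromMaybe f (find (== f) fs)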


Brandon

 --
 La perfection est atteinte non quand il ne reste rien a ajouter, mais
 quand il ne reste rien a enlever.  (Antoine de Saint-Exupery)

 ___
 Haskell-Cafe mailing list
 [EMAIL PROTECTED]
 http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: [Haskell] Per-type function namespaces (was: Data.Set whishes)

2004-02-27 Thread Brandon Michael Moore


On Fri, 27 Feb 2004, Simon Peyton-Jones wrote:

  The idea that I've been throwing around is to be able to define a
  separate namespace for each type; a function can either belong in a
  global (default) namespace, or belong in a particular type's
  namespace.  So, in the above example, instead of writing addToFM fm
  ..., we could instead associate an 'add' function with the FiniteMap
  type, so we could write fm.add ... instead.  Provided that fm's type

  is monomorphic, it should be possible to call the 'correct' add
  function; if we defined another 'add' function that's associated with

 Remember, too, that in OO languages the type of 'fm' is usually
 declared, in advance, by the programmer.  In Haskell it isn't.   That
 makes it much harder to figure out which 'add' function is going to be
 used.

 Which 'add' function is chosen depends on the type of 'fm'.  But the
 add function that is chosen in turn influences the type of the other
 arguments.  For example, in the call (fm.add foo), the type of 'foo' is
 influenced by the choice of 'add'.  But the type of 'foo' might (by the
 magic of type inference) affect the type of 'fm'

 In Haskell today, you can at least tell what value is bound to each
 identifier in the program, *without* first doing type checking.  And the
 binding story, all by itself, is somewhat complicated.  The typing story
 is also (very) complicated.  Winding the two into a single indissoluble
 whole would make it far more complicated still.

I thought this wasn't the case if there are type classes involved. What
value is + bound to in 1 + 1? All I can think is to say that the
appropriate value of + is selected based on the types, or to say that the
value here is the class member (subsuming several instances). Either way,
I don't see how overloading individual function names would have a
greatly different story.
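
To make that concrete, here is a tiny made-up example where the code a
name stands for is only fixed once the types are known:

class Scalable a where
  scale :: a -> a

instance Scalable Int  where scale = (* 2)
instance Scalable Bool where scale = not

demo1 :: Int
demo1 = scale 3      -- runs the Int instance's code

demo2 :: Bool
demo2 = scale True   -- runs the Bool instance's code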

Actually, picking a version of a function (from the versions in scope)
based on which type actually works might be useful. It seems to extend the
handling of overlapping names in a useful direction, resolving
ambiguity by assuming you meant to write a well-typed program.

We would probably want some special syntax with the imports to
request/flag this behaviour, like import A; import B; import C; resolve
foo. One heuristic would be typechecking with no information on the
name(s) and checking that there is a unique way to resolve the ambiguity
at each point.

 My nose tells me that this way lies madness.

I think the general principle of using types to capture and infer intent
is still sound. It would be nice to have ad-hoc overloading also in cases
where we don't see a common intent between several functions to capture
with a typeclass (intents that we can't capture are arguments for
improving the class system).

A lot of Haskell looks like madness already anyway :)
We just need to find things that look like good madness ;)


 But I've been wrong before.

 Simon

Brandon

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] Per-type function namespaces (was: Data.Set whishes)

2004-02-26 Thread Brandon Michael Moore
On Fri, 27 Feb 2004 [EMAIL PROTECTED] wrote:

 On 27/02/2004, at 1:13 PM, [EMAIL PROTECTED] wrote:

 1) now I have to manually declare a class definition for every single
 function, and I have to declare it in advance before any module defines
 that function (most serious problem; see below),

 2) I then have to declare instances of that type class for every
 function I define,

 3) the type signature for phase reveals no clues about how to use that
 function.

Declaring a type class instance is really no problem. You just need to
write an instance Class (Type) instead of function :: Type on the line
before the function declaration. The type of phase itself wouldn't
provide much information, but the list of instances each module defines
would be informative. Something like :info wouldn't be much help without
modification.

 So unfortunately, this is hardly a scalable solution.  The entire
 reason I came up with the idea is because if we use type classes to
 implement this sort of overloading, we have to know every single
 possible function that any module author will ever create, and declare
 classes for those functions in advance.  This is fine if you're
 declaring truly polymorphic functions which are designed from the start
 to be totally general, but it is not designed for functions which may
 do vastly different things and may contain totally different type
 signatures, but share the same name because that would be a sensible
 thing to do.  (e.g. the phase function mentioned above.)

In the paper Object-Oriented Style Overloading for Haskell, Mark Shields
and Simon Peyton-Jones propose, among other things, adding method
constraints to the type system, which (as far as I can tell) basically
amounts to generating a type class for each function name, and letting
you write constraints like (foo :: Int -> Int) on your function.

They would set up the type classes like class Has_foo a where foo :: a,
which can cause problems if your argument and return value are
polymorphic under a class constraint rather than concrete types. Making
the method classes implicitly closed would probably help here (closed
classes are another of their suggestions). Under that closed-world assumption
it would probably be nice if it only selected between the versions of the
function that were actually in scope at the moment (so these would act
kind of like methods that overload if you import several of them, rather
than conflicting like normal).
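
A rough sketch of that encoding as it would look in today's GHC (my own
approximation of the idea, not the paper's actual system; needs
-fglasgow-exts for the instance heads):

class Has_add a where
  add :: a

-- one "overloading" of add per instance; the whole signature is the class
-- parameter
instance Has_add (Int -> Int -> Int) where
  add = (+)

instance Has_add ([b] -> [b] -> [b]) where
  add = (++)

demo1 :: Int
demo1 = add (1 :: Int) (2 :: Int)   -- picks the Int -> Int -> Int version

demo2 :: String
demo2 = add "foo" "bar"             -- picks the list version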

As long as we are integrating these special type classes into the language
we can make sure things like error messages and ghci give decent
information, maybe listing all the different types the function is
imported at, and where each version is defined.

 With the per-type namespace separation I'm advocating, you do not need
 to know and declare in advance that each function will be overloaded,
 you simply write a FiniteMap.add function and a Set.add function, and
 you get a simpler form of namespace separation (or overloading) based
 on the first parameter that's passed to it.  It is a solution which is
 more _flexible_ than requiring type class definitions, and it is better
 than having hungarian notation for functions.  In fact, I think that,
 right now, if we replaced the current namespace separation offered by
 the hierarchical module system, and instead only had this sort of
 per-type namespace separation, things would still be better!

How much of the structure of the first parameter would you look at? Could
you have an implementation for pairs that depended on the actual types in the
pair? I think you should try to take advantage of the existing type class
machinery as much as possible here, even if what you want are not exactly
(standard) type classes.

 I realise my idea isn't very general in that it only allows this
 namespace lookup/overloading based on the type of a single argument
 parameter, and I think it would be possible with a bit more thinking to
 generalise it to work based on multiple arguments (e.g. via
 argument-dependent lookup, or whatnot).  But even in its current form,
 I honestly think it offers far more flexibility and would lead to
 cleaner APIs than is currently possible.

Read the paper and see if you think something like that might be useful.
In any case, I think there's a decent chance that something useful for
this would also be useful for building interfaces to object-oriented
libraries, and vice versa. I think there's probably something that covers
both cases nicely and uniformly.

Brandon

 --
 % Andre Pang : trust.in.love.to.save
 ___
 Haskell mailing list
 [EMAIL PROTECTED]
 http://www.haskell.org/mailman/listinfo/haskell



___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: type classes, superclass of different kind

2003-12-11 Thread Brandon Michael Moore


On Thu, 11 Dec 2003, Robert Will wrote:

 Hello,

 As you will have noticed, I'm designing a little library of Abstract Data
 Structures; here is a small excerpt to get an idea:

 class Collection coll a where
 ...
 (+) :: coll a -> coll a -> coll a
 reduce :: (a -> b) -> b
   -> coll a -> b
 ...

 class Map map a b where
 ...
 (+) :: map a b -> map a b -> map a b
 at :: map a b
   -> a -> b
 ...

 Note that the classes don't only share similar types, they also have
 similar algebraic laws: both + and + are associative, and neither is
 commutative.

 Now I would like to have Collection to be a superclass of Map yielding the
 following typing

 reduce :: (Map map a b) =>
   ((a, b) -> c) -> c
   -> map a b -> c

Functional dependencies will do this.

class Collection coll a | coll -> a where
...
(+) :: coll -> coll -> coll
reduce :: (a -> b -> b) -> b -> coll -> b
...

class (Collection map (a,b)) => Map map a b | map -> a b where
...
(+) :: map -> map -> map
at :: map -> a -> b

Now you make instances like

instance Collection [a] a where
   (+) = (++)
   reduce = foldr

instance (Eq a) => Map [(a,b)] a b where
   new + old = nubBy (\(x,_) (y,_) -> x == y) (new ++ old)
   at map x = fromJust (lookup x map)
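
(A quick check of those instances, with my own example values; the module
needs -fglasgow-exts for the multi-parameter classes and fundeps, plus
nubBy from Data.List and fromJust from Data.Maybe:)

total :: Int
total = reduce max 0 [3,1,2 :: Int]        -- foldr max 0, via the list instance

looked :: Int
looked = [('a',1),('b',2 :: Int)] `at` 'b' -- the association-list Map instance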


 Note that in an OO programming language with generic classes (which is in
 general much less expressive than real polymorphism), I can write

 class MAP[A, B] inherit COLLECTION[TUPLE[A, B]]

 which has exactly the desired effect (and that's what I do in the
 imperative version of my little library).

This isn't exactly the same thing. In the OO code the interface
collections must provide consists of a set of methods. A particular
type, like COLLECTION[INTEGER] is the primitive unit that can implement
or fail to implement that interface.

In the Haskell code you require a collection to be a type constructor that
will give you a type with appropriate methods no matter what you apply
it to (ruling out special cases like extra compace sequences of booleans
and so on). A map is not something that takes a single argument and makes
a collection, so nothing can implement both of your map and collection
interfaces.

The solution is simple: drop the spurious requirement that collections
be type constructors (or that all of our concrete collection types were
created by applying some type constructor to the element type). The
classes with functional dependencies say just that: our collection type
provides certain methods (involving the element types).

Collections were one of the examples in Mark Jones' paper on
functional dependencies (Type Classes with Functional Dependencies,
linked from the GHC Extension:Functional Dependencies section of the
GHC user's guide).

 There seems to be no direct way to achieve the same thing with Haskell
 type classes (or any extension I'm aware of).  Here is a quesion for the
 most creative of thinkers: which is the design (in proper Haskell or a
 wide-spread extension), possibly including many intermediate type classes and
 other stuff, that comes nearest to my desire?

 I believe this question to be important and profound.  (We shouldn't
 make our functional designs more different from the OO ones, than they
 need to be.)  If I err, someone will tell me :-

What problems do objects solve? They let you give a common interface to
types with the same functionality, so you can make functions slightly
polymorphic in any argument type with the operations your code needs.
They organize your state. They let you reuse code when you make a new
slightly different type. Am I missing anything here?

I think type classes are a much better solution than inheritance for
keeping track of which types have which functionality. (at least the way
interface by inheritance works in most typed and popular object oriented
languages.)

Inheritance only really works for notions that only involve the type doing
the inheriting, or are at least heavily centered around that type. I don't
think Functor can be represented as an interface, or at least not a very
natural one. Most languages I know of (see Nice for an exception) also
require you to declare the interface a class supports when you declare it,
which is really painful when you want your code to work with types that
were around before you were, like defining a class to represent
marshallable values for interface/serialization code.

Are there any advantages to inheritance for managing interfaces? Maybe
it takes a few minutes less to explain the first time around. It's
probably easier to implement. Beyond that, I see nothing. Any creative
thinkers want to try this? (An answer here would motivate an extension
to the type class system, of course).

Brandon

 Robert

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: Multiple functions applied to a single value

2003-12-09 Thread Brandon Michael Moore

Control.Monad.Reader defines instances of Monad and MonadReader for
((->) r). Strangely enough, the documentation claims the Monad instance
comes from Control.Monad, which is untrue.

Here's the relevant chunk of the file. It looks like you came up with
exactly the same code (modulo names).

-- 

-- The partially applied function type is a simple reader monad

instance Functor ((->) r) where
   fmap = (.)

instance Monad ((->) r) where
   return  = const
   m >>= k = \r -> k (m r) r

instance MonadFix ((->) r) where
   mfix f = \r -> let a = f a r in a

instance MonadReader r ((->) r) where
   ask   = id
   local f m = m . f
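
A tiny usage example of those instances (my own, not from the library
docs): several computations read the same shared environment, here a list
of Ints.

import Control.Monad.Reader

stats :: [Int] -> (Int, Int)
stats = do
  total <- sum      -- sum is run on the shared environment
  count <- length   -- so is length
  return (total, count)

demo :: (Int, Int)
demo = stats [1,2,3,4]   -- (10,4)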

Brandon

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Functional dependencies interfere with generalization

2003-11-27 Thread Brandon Michael Moore


On Wed, 26 Nov 2003, Ken Shan wrote:

 Hello,

 Consider the following code, which uses type classes with functional
 dependencies:

 {-# OPTIONS -fglasgow-exts #-}
 module Foo where
 class R a b | a -> b where r :: a -> b

 -- 1
 rr :: (R a b1, R a b2) => a -> (b1, b2)
 rr a = (r a, r a)

 -- 2
 data RAB a = RAB (forall b. (R a b) => (a, b))
 mkRAB :: (R a b) => a -> b -> RAB a
 mkRAB a b = RAB (a, b)

 Neither 1 nor 2 passes the type-checker (GHC 6.0.1).  The error messages
 are similar:

I agree that the typechecker could handle this better, but I don't see why
you should need types like this. You should be able to use

rr :: (R a b) => a -> (b,b)

and

data RAB a = forall b. (R a b) => RAB (a,b)

equally well, and these typecheck.

I think the root of the problem is the foralls. The typechecker doesn't
realize that there is only one possible choice for these universally
quantified variables based on the functional dependencies. For rr it
complains because you can't allow every b2, just b2 = b1, not realizing
that that is already implied by the class constraints. Similarly for RAB
it complains because the pair you give it is obviously not universally
polymorphic in b, not realizing that there is only one choice for b
consistent with the class constraints. Compare this code:

class R a b where r :: a -> b

rr :: (R a b1, R a b2) => a -> (b1, b2)
rr x = let rx = r x in (rx, rx)

and

data P a = P (forall b. (a,b))

Off the top of my head, the solution to this problem would probably be
something like ignoring foralls on a type variable that is determined by
fundeps, but I think the type system would need some sort of new
quantifier or binding construct to introduce a new type variable that is
completely determined by its class constraints. Something like forall a.
constrained b. (R a b) => a -> (b, b). A forall binding a variable
determined by fundeps could be reduced to a constrained binding, which
would be allowed to do things like unify with other type variables.

I'm not sure anything really needs to be done. I think you can always
type these examples by unifying the redundant type variables in a
signature by hand, and by using existentially quantified data types
rather than universally quantified ones. Do you have examples that
can't be fixed like this?
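
For what it's worth, here is the existential version spelled out as a
small self-contained sketch (needs -fglasgow-exts):

class R a b | a -> b where
  r :: a -> b

-- package a value with its R-image; b is existentially quantified
data RAB a = forall b. R a b => RAB (a, b)

mkRAB :: R a b => a -> RAB a
mkRAB a = RAB (a, r a)

rr :: R a b => a -> (b, b)
rr a = (r a, r a)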

Brandon



 Inferred type is less polymorphic than expected
 Quantified type variable `b2' is unified with another quantified type 
 variable `b1'
 When trying to generalise the type inferred for `rr'
 Signature type: forall a b1 b2.
 (R a b1, R a b2) =>
 a -> (b1, b2)
 Type to generalise: a -> (b1, b1)
 When checking the type signature for `rr'
 When generalising the type(s) for `rr'

 Inferred type is less polymorphic than expected
 Quantified type variable `b' escapes
 It is mentioned in the environment:
   b :: b (bound at Foo.hs:17)
 In the first argument of `RAB', namely `(a, b)'
 In the definition of `mkRAB': mkRAB a b = RAB (a, b)

 In both cases, the compiler is failing to make use of functional
 dependencies information that it has at its disposal.  Specifically,
 it seems to me that, if two type variables b1 and b2 have been unified
 due to functional dependencies, making two constraints in the context
 identical, then the inner constraint (inner with respect to the scope
 of quantified type variables) should be ignored.

 Is there a technical reason why the type checker should reject the code
 above?  Would it be possible to at least automatically define a function
 like

 equal :: forall f a b1 b2. (R a b1, R a b2) => f b1 -> f b2

 for every functional dependency, with which I would be able to persuade
 the type checker to generalize (Baars and Swierstra, ICFP 2002)?  I
 suppose I can use unsafeCoerce to manually define such a function... is
 that a bad idea for some reason I don't see?

 Thank you,
   Ken

 --
 Edit this signature at http://www.digitas.harvard.edu/cgi-bin/ken/sig
 Tax the rich!
 new journal Physical Biology: http://physbio.iop.org/
 What if All Chemists Went on Strike? (science fiction):
 http://www.iupac.org/publications/ci/2003/2506/iw3_letters.html


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


RE: type class problem / GHC bug

2003-11-11 Thread Brandon Michael Moore


On Mon, 10 Nov 2003, Simon Peyton-Jones wrote:

 | Also, I tried to update and rebuild, but the makefiles seem to have the
 | dependencies wrong or something. I compiled THSyntax.hs by hand, then ran
 | into some trouble with files that needed some modules from GHCI trying
 | (and dying) to build before the ghci files. Is there a guide to building
 | GHC from CVS anywhere? I had the same problem with alex, but that's
 | small enough to build by hand.

 Should be ok... are you sure you are up to date?  Make sure you give
 -d to cvs update, so you get the new TH subdirectory of
 Haskell-src/Language/Haskell

I've done a clean rebuild, which fails at
HscTypes (in compiler/main) which depends on ByteCodeAsm (in
compiler/ghci), which hasn't been built yet. It looks like ghci
may be missed in dependency generation, based on grep
not finding ghci in .depend-1.


___
Glasgow-haskell-bugs mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


type class problem / GHC bug

2003-11-08 Thread Brandon Michael Moore
Hi everyone

I've built GHC from CVS and I'm getting some odd errors about overlapping
instances. This is different from 6.0.1, but it's not obvious it is wrong,
so I'm probably missing something here.

The example is

class A x
class (A x) => B x
instance A x
instance B x

The new GHC complains that the second instance overlaps with the first.
Maybe because of the context on B x, the instance for B x is interpreted as
a claim that we have A x too; but shouldn't it be the other way around, that
you need an instance A x from somewhere along with an instance for B x?

Also, I tried to update and rebuild, but the makefiles seem to have the
dependencies wrong or something. I compiled THSyntax.hs by hand, then ran
into some trouble with files that needed some modules from GHCI trying
(and dying) to build before the ghci files. Is there a guide to building
GHC from CVS anywhere? I had the same problem with alex, but that's
small enough to build by hand.

Thanks

Brandon

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


RE: automaticly create the ana, cata, hylomorphisms

2003-11-08 Thread Brandon Michael Moore


On Fri, 7 Nov 2003, Marta Isabel Oliveira wrote:

 Ok, i read the page but i'm still stuck.

 I'm reading some papers about cata,  ana and hylomorphisms but i need to know where 
 to start putting it to code.

 So,

 1. in order to have a pre-processor, i need to have a module with ALL data types?
 2. from this module, how could i create the cata, ana,...

 i mean,

 a)how can i represent the result in haskell
 b) how can i create the cata(whatever) without knowing all of the possible functions 
 that may be inserted in order for its cata to be created?

 thanks a lot,

 lesina.


You might want to write this as a module for DrIFT. That already has code
for parsing data types and writing generated code. Once you have the parsing
and pretty printing, you just need to extract the AST for the morphisms
from a nice data type representing the type declaration. That should be
a simple function to write. It's a different story if you don't know what
functions you are supposed to be generating, of course. You know how to
write the morphisms by hand for a given type, right?
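
For a concrete starting point, this is roughly what the generated
functions look like for one simple type (my own hand-written example, not
DrIFT output):

data Tree a = Leaf | Node (Tree a) a (Tree a)

-- catamorphism: replace each constructor with a function
cataTree :: b -> (b -> a -> b -> b) -> Tree a -> b
cataTree leaf node Leaf         = leaf
cataTree leaf node (Node l x r) =
  node (cataTree leaf node l) x (cataTree leaf node r)

-- anamorphism: unfold a tree from a seed
anaTree :: (b -> Maybe (b, a, b)) -> b -> Tree a
anaTree step seed = case step seed of
  Nothing        -> Leaf
  Just (l, x, r) -> Node (anaTree step l) x (anaTree step r)

-- hylomorphism: an unfold followed by a fold
hyloTree :: b -> (b -> a -> b -> b) -> (c -> Maybe (c, a, c)) -> c -> b
hyloTree leaf node step = cataTree leaf node . anaTree step

-- e.g. summing every element: cataTree 0 (\l x r -> l + x + r)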

Brandon

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: Type tree traversals [Re: Modeling multiple inheritance]

2003-11-06 Thread Brandon Michael Moore


On Wed, 5 Nov 2003, Simon Peyton-Jones wrote:

 | More overlapping:
 | Allow any overlapping rules, and apply the most specific rule that
 | matches our target. Only complain if there is a pair of matching
 | rules neither of which is more specific than the other.
 | This follow the spirit of the treatment of duplicate imports...

 Happy days.  I've already implemented this change in the HEAD.  If you
 can build from source, you can try it.

Great. But I can't build from the source: I'm getting errors about a
missing config.h.in in mk. I'm just trying autoconf, configure. I'll look
closer over the weekend.

 | Backtracking search:
 | If several rules matched your target, and the one you picked didn't
 | work, go back and try another.
 |
 | This isn't as well thought out: you probably want to backtrack through all
 | the matching rules even if some are unordered by being more specific. It
 | would probably be good enough to respect specificity, and make other
 | choices arbitrarily (line number, filename, etc.; maybe Prolog has a
 | solution?). This probably isn't too hard if you can just add
 | nondeterminism to the monad the code already lives in.

 I didn't follow the details of this paragraph.  But it looks feasible.

It's an unclear paragraph. I meant that if we are just looking for the
first match, we should try more specific rules before less specific rules.
That doesn't give us a complete ordering, so we might do something
arbitrary for the rest, unless there is a better solution.

I think we should make sure that there are not multiple solutions, but we
want more specific rules to take priority. Order the solutions
lexicographically by how specific each rule in the derivation was and
complain if there isn't a least element in this set of solutions.  To
implement, if at each step there is a most specific rule in the set we
haven't tried, and making that choice at every step gives us a solution,
we know we have the most specific solution and don't need to keep
searching.

I don't want to be too strict about having a unique solution because
that can prevent modelling multiple inheritance.

Brandon

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: Type tree traversals [Re: Modeling multiple inheritance]

2003-11-04 Thread Brandon Michael Moore
On Tue, 4 Nov 2003, Simon Peyton-Jones wrote:


 | We really should change GHC rather than keep trying to work around
 stuff
 | like this. GHC will be my light reading for winter break.

 Maybe so.  For the benefit of those of us who have not followed the
 details of your work, could you summarise, as precisely as possible,
 just what language extension you propose, and how it would be useful?  A
 kind of free-standing summary, not assuming the reader has already read
 the earlier messages.

 Simon

There are two extensions here:

More overlapping:
Allow any overlapping rules, and apply the most specific rule that
matches our target. Only complain if there is a pair of matching
rules neither of which is more specific than the other.

This follows the spirit of the treatment of duplicate imports, and
lets you do more interesting computations with type classes.
For example, it makes the sort of type class hacks Oleg and I have been
writing much easier. You use nested tuples to hold a list of values your search
is working over, have a rule that expands the head to a list of
subgoals, a rule that flattens lists with a head of that form,
and an axiom that stops the search if the head has a different
form, without needing the stop form to unify with a pair.

This extension would accept the code I just posted, and seems pretty
conservative.

Backtracking search:
If several rules matched your target, and the one you picked didn't
work, go back and try another.

This isn't as well thought out: you probably want to backtrack through all
the matching rules even if some are unordered by being more specific. It
would probably be good enough to respect specificity, and make other
choices arbitrarily (line number, filename, etc.; maybe Prolog has a
solution?). This probably isn't too hard if you can just add
nondeterminism to the monad the code already lives in.

This would give you OR. The example Integral a => MyClass a,
Fractional a => MyClass a would work just fine and give you a class that
is the union of integral and fractional. This class hierarchy search
could be done by a SubClass class that had an instance linking a class
to each of its different parents; then the search just needs to backtrack
on which parent to look at:

class SubClass super sub

instance SubClass A C
instance SubClass B C

class HasFoo cls where
  foo :: cls -> Int
instance (SubClass super sub, HasFoo super) => HasFoo sub
instance HasFoo B

Now look for an instance of HasFoo C:
  it uses the first rule for HasFoo.
  That needs an instance SubClass x C. It tries A, but can't derive HasFoo A.
  GHC backtracks to trying B as the parent, where it can
  use the second instance for HasFoo and finish the derivation.

Overloading resolution:
This one is really half-baked, but sometimes it would be nice if there was
some way to look at

class MyNumber a where
  one::a
instance MyNumber Int where
  one = 1

then see (one+1) and deduce that the 1 must have type Int, rather than
complaining about being unable to deduce MyNumber a from Num a. This is
really nice for some cases, like a lifting class I wrote for an Unlambda
interpreter, with instances for LiftsToComb Comb and (LiftsToComb a =>
LiftsToComb (a -> Comb)). With some closed-world reasoning, lift id and
lift const might give you I and K rather than a type error. Also, for
this inheritance-modelling work you almost always have to give type
signatures on numbers so you find the method that takes an Int, rather
than not finding anything that takes any a with Num a. This obviously
breaks down if you have instances for Int and Integer, and I don't yet
know if it is worth the trouble for the benefits in the cases where it
would help. Implementation is also a bit tricky. I think it requires
unifying from both sides when deciding if a rule matches a goal.

Improvements and better suggestions welcome. I'm only particularly
attached to the first idea.

Brandon

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Type tree traversals [Re: Modeling multiple inheritance]

2003-11-03 Thread Brandon Michael Moore
Thanks for the clever code, Oleg. I've tried to extend it again to track
the types of methods as well as just the names, giving a functional
dependency from the class, method, and arguments to the result type. I can't get the
overlapping instances to work out, so I'm handing it back to a master,
and the rest of the list.

We really should change GHC rather than keep trying to work around stuff
like this. GHC will be my light reading for winter break.

The core of the classes are here:

--records superclasses and new methods.
class Interface super sub | sub -> super
--This has any new methods/overloadings, as well as superclasses.
instance Interface (Foo Int Bool,(Bar Bool Int,(ClassC,(ClassA,())))) ClassB

--the worker type class to search the ancestors for a method.
--Ancestors Have Method
class AHM objs (method :: * -> * -> *) args result | objs method args -> result

--the first two instances conflict.
instance AHM (m a r,x) m a r
instance (AHM (x,(y,cs)) m a r) => AHM ((,) x y,cs) m a r
instance (AHM cs m a r) => AHM ((),cs) m a r
instance (Interface items c, AHM (items,cs) m a r) => AHM (c,cs) m a r

The instances AHM (m a r,x) m a r
and AHM ((,) x y,cs) m a r
are conflicting.
Again, I'm willing to compute the inheritance once and have a tool write
out instances for each overloading available at each class, but it's just
so much cooler to do this in the typeclass system.

For anyone who hasn't been following this, the problem is a java
interface. There are several classes, in a DAG. At several points
in the DAG methods are declared, with an argument type and a return
type. I want some statically checked way of resolving a call with the
name, an object, and an argument list to a particular declaration of
the method with the same arguments in one of the ancestors of the
class. Bonus points for a functional dependency from class+arguments
to result.

The practical upshot is being able to write code no more complicated than
the java you are replacing:
  do frame <- new_JFrame ()
 set_size frame (10,100)
 set_visible frame True
 ...
vs.
  do frame <- new_JFrame ()
 set_size_JFrame_JInt_JInt_JVoid frame (10,100)
 set_visible_JFrame_JBool_JVoid frame True
 ...
and fun things like functions that work on any object with the correct
interface, not just descendants of some particular class (hey, it's
neat for statically-typed OO languages, okay?)

Brandon

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: listProduct -- is this a standard function?

2003-10-17 Thread Brandon Michael Moore
I'm pretty sure this is sequence.

Brandon

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Using field selectors in a type constructor

2003-10-14 Thread Brandon Michael Moore
On Mon, 13 Oct 2003, Graham Klyne wrote:

 I've run across a minor coding niggle on a couple of occasions when using
 a type constructor with field selectors.  The full code of my test case is
 below.  The value 'test2' evaluates to True.

 The function that niggles me is this:

 [[
 joinVarBindings :: (Eq a) => VarBinding a b -> VarBinding a b -> VarBinding a b
 joinVarBindings vb1 vb2
  | vbNull vb1 = vb2
  | vbNull vb2 = vb1
  | otherwise  = VarBinding
  { vbMap  = mv12
  , vbEnum = map (\v -> (v,fromJust (mv12 v))) $
 boundVars vb1 `union` boundVars vb2
  , vbNull = False
  }
  where
  mv12 = headOrNothing . filter isJust . flist [ vbMap vb1, vbMap vb2 ]
 ]]

 Is it really necessary to define mv12 as a separate where clause here?

 What I'd really like to do is assign it to field vbMap, and reference that
 from the definition of vbEnum, but I can't figure out if there's a way
 to do so.  Writing this:


 Results in a fairly obvious type error:  I'd need to have a way to say that
 vbMap is applied to the value under construction.  Experience with Java would
 suggest maybe something like this:
 [[
  , vbEnum = map (\v -> (v,fromJust (vbMap this v))) $
 ]]
 but of course Haskell isn't Java.

The natural way to do this is to apply vbMap to the value under
construction, Haskell being lazy and all. Of course this requires
naming the value under construction, but it hardly makes a difference
when there is only one subexpression.

[[
joinVarBindings :: (Eq a) => VarBinding a b -> VarBinding a b -> VarBinding a b
joinVarBindings vb1 vb2
 | vbNull vb1 = vb2
 | vbNull vb2 = vb1
 | otherwise  = let vb = VarBinding
 { vbMap  = headOrNothing . filter isJust . flist [ vbMap vb1,
vbMap vb2 ]
 , vbEnum = map (\v -> (v,fromJust (vbMap vb v))) $
boundVars vb1 `union` boundVars vb2
 , vbNull = False
 } in vb
]]

This should work.

Brandon.

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: constructor name clashes

2003-10-08 Thread Brandon Michael Moore

On Wed, 8 Oct 2003 [EMAIL PROTECTED] wrote:

 Hi, all.

 I'm a newbie to Haskell so please bear with me if my questions sound silly.

 In coding Haskell, I feel very inconvenient that the name of data
 constructors for different types have to be different.

 Also, when declaring named fields of a type, such as
 data Data1 = Data1{ok1::Bool}
 data Data2 = Data2{ok2::Bool}
 the field names for different type also have to be unique.

 Isn't that annoying? Keeping all the names unique is no easy task in my
 opinion. Wouldn't it be nice if we can have something similar to structure
 fields in C? Or maybe this is already present and I'm just being ignorant?

There has been a lot of work on a real record system, but none has made it
into GHC. HUGS has TREX, which is based on row variable polymorphism and
seems fairly powerful. O'Haskell does something based on subtyping IIRC.
The big problem is expressing functions that add or remove fields
from a record. If you are just interested in fetching or setting fields
you can use type classes:

data A = A Int String --A {foo::Int, bar::String}
data B = B Bool --B {foo::Bool}

class HasFooField t a | t -> a where
  foo :: t -> a
  updateFoo :: t -> a -> t

instance HasFooField A Int where
  foo (A f _) = f
  updateFoo (A _ b) v = A v b

instance HasFooField B Bool where
  foo (B b) = b
  updateFoo (B _) v = B v
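
A quick usage example of those instances (my own): the same field name now
works on both types, and the functional dependency fixes the field's type.

getFoos :: A -> B -> (Int, Bool)
getFoos a b = (foo a, foo b)

bumpFoo :: A -> A
bumpFoo a = updateFoo a (foo a + 1)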

You could probably do something with template haskell that would
generate all the instances for you, but you would still need to import
the classes from somewhere.

$(struct [d| data A = A1 {foo::Int, bar::String} | A2 {foo::Int} |])
$(struct [d| data B = B {foo::Bool} |])

There was talk about compiler support for a system like this, and I think
the main obstacle is convincing somebody to do it, which mostly means
demonstrating that it's useful, because there's no interesting theory
here. It also makes code a bit slower, but fields are not that common
so it shouldn't be much of an issue.

Do we really lose that much to the name overloading restrictions?

The only use I can think of for records used as big bags of tagged values
is for passing configuration options to complicated libraries, or getting
back a bunch of values from something like parsec's token parser
generator. This isn't very common, because usually any powerful function
in Haskell is built from a layer of smaller functions that make the final
function ridiculously easy to write. If this is true for some library, the
author can either write gobs of boring option handling code that tries to
expose some fraction of this power, or just extend their export list a bit
and give the user the primitives to play with themselves.

I don't think constructor name clashes come up that often either. There
are a few names like Leaf or Empty that you want to use all the time
if you are defining trees and things, but that's more playing around with
trivial data structures than writing real code. Two cases I've run into
are successive program representations in a compiler (everything from
raw tokens through Core has an if), and having an exploded
representation of a type as the fixpoint of a functor for interesting
recursion-scheme uses, and a normal version so I can avoid the tagging.

It would probably be feasible to figure out which constructor you mean
in a function like

desugar :: ASTSyntax -> M IntermediateSyntax
desugar (IF c t e) = If (f c) (f t) (f e)
desugar (ListComprehension head gens) = desugarListComp head gens
...
...

But other cases are much harder (and probably much less sensible).
How do you infer a type for something like this?

f (C x) = case x of (C x) -> case x of (C x) -> case x of (C x) -> x

I think the typing rules would have to be based on something other than
unification and generalization like the rest of the type system, and would
probably introduce all sorts of annoying restrictions about how you are
allowed to use constructors, or how to resolve the type.


Brandon

 I know that we can use modules to introduce name spaces. But still, this is
 quite cumbersome.

 Thanks!

 Ben.



 This message is intended only for the addressee and may contain information
 that is confidential or privileged. Unauthorized use is strictly prohibited
 and may be unlawful. If you are not the intended recipient, or the person
 responsible for delivering to the intended recipient, you should not read,
 copy, disclose or otherwise use this message, except for the purpose of
 delivery to the addressee. If you have received this email in error, please
 delete and advise us immediately.


 ___
 Haskell mailing list
 [EMAIL PROTECTED]
 http://www.haskell.org/mailman/listinfo/haskell



___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Modeling multiple inheritance

2003-09-27 Thread Brandon Michael Moore
On Fri, 26 Sep 2003 [EMAIL PROTECTED] wrote:

 Brandon Michael Moore wrote regarding the first solution: chain of
 super-classes:

  I'm worried about large class hierarchies. If it works on the
  java.* classes I should be fine. Have you used this approach before? I'm
  worried about compile time, runtime costs from the casts (hopefully they
  compile out), and maybe exceeding maximum stack depth in context
  reduction.

 I didn't use the approach for anything as complex as all java.*
 classes. The only run-time costs are evaluating the chain of fst . snd
 . fst . 

I think I can use the pair types as phantom types on a reference type, so
my casts will hopefully be the identity function. (.) should be small
enough to inline, so GHC probably compiles id . id ... id to id. Correct?
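
Roughly what I have in mind (a sketch with made-up names, not the real
jvm-bridge types; -fglasgow-exts for the fundep): the reference type
carries its class as a phantom parameter, so an upcast changes only the
type.

data RawRef = RawRef            -- stand-in for the underlying JVM reference

newtype JRef cls = JRef RawRef  -- cls is a phantom parameter

class SubClass super sub | sub -> super

upCastRef :: SubClass super sub => JRef sub -> JRef super
upCastRef (JRef r) = JRef r     -- the identity on the representation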

 The length and the composition of the chain is statically
 known. Perhaps the compiler can do something smart here. The maximum
 length of the chain is the maximum depth of the inheritance tree. It
 shouldn't be too big. A cast from a subclass to a superclass has to be
 executed anyway (if not by your code then by JVM). If the maximum
 stack depth is exceeded, we can repeat the compilation with a compiler
 flag to allocate a bigger stack. In my experience the only time I've
 seen the derivation stack depth exceeded is when the derivation truly
 diverges.

Same for me, but I've never tried to model the java.* hierarchy either. I
think you get a cast (fst in your code) for each parent of each ancestor
along the inheritance path, which probably increases the count some.

Your code doesn't quite work. The instances you gave only allow you to
inherit from the rightmost parent. GHC's inference algorithm seems to pick
one rule for a goal and try just that. To find instances in the first
parent and in other parents it needs to try both. I think I'll just give
up on inheriting methods, and generate unrelated instances for each class
that needs one.

Brandon

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Modeling multiple inheritance

2003-09-26 Thread Brandon Michael Moore
On Thu, 25 Sep 2003 [EMAIL PROTECTED] wrote:

 Brandon Michael Moore wrote:

  So I defined a class to model the inheritance relationships

  class SubType super sub | sub -> super where
upCast :: sub -> super

  Now I can define a default instance of HasFooMethod:
  instance (HasFooMethod super args result,
SubClass super sub) =>
   HasFooMethod sub args result where
foo sub args = foo (upCast sub) args

  This will propagate foo methods down the inheritance hierarchy. If a new
  class C is derived from A, I just need to say

  One problem is that the subclass relationship needs the functional
  dependency

  Does anyone know of clever solutions that would model multiple inheritance
  while preserving the functional dependencies (unsafe compiler flags are
  fine too), or ways to reduce the pain of overloading resolution without
  the functional dependency?

 Yes. The code included. The solution is trivial: in case of a multiple
 inheritance, a class has a _sequence_ of superclasses rather than a
 single superclass. Like

 instance SubClass (Object,()) ClassA
 instance SubClass (Object,()) ClassB

 -- Multiple inheritance (including the diamond!)
 instance SubClass (ClassA,(ClassB,())) ClassC
 instance SubClass (ClassA,(ClassB,(ClassC,()))) ClassD

 And we need some intelligence to traverse the sequence. But even a
 computer can do that.

That should solve my problem. Putting all the superclasses in a tuple
should work. I'm worried about large class hierarchies. If it works on the
java.* classes I should be fine. Have you used this approach before? I'm
worried about compile time, runtime costs from the casts (hopefully they
compile out), and maybe exceeding maximum stack depth in context
reduction. This is a clever solution. I like it. Now, is anyone up to
encoding the Dylan MRO in Haskell type classes? ;)

   I would like to propose a different solution: a dual of
 typeclasses in the value domain. Function foo is just a regular
 function

 foo :: Object -> Int -> Int
 foo x y = y

 We then need a class MApplicable fn args result with a method
 mapply. The trick is that the method should take any object of a type
 castable and cast it to the type of the first argument of fn. The cast
 can be made safe and statically checkable, using the type
 heap. Actually, we can use the type heap to model the dispatch table
 (whose rows are functions and columns are object/classes). Given a
 function and an object, we can search in many way for the applicable
 combination.

What type heap? It sounds like you are talking about information from an
OO runtime, or are you talking about the collection of instances? I tried
a system where method names were also represented by data types, but
without your solution for multiple inheritance I couldn't get the
implementation inheritance I wanted. How would you implement this dispatch
table? What are the advantages of this approach over the type class
encoding? I'm worried that generating bindings would be a problem if the
dispatch table needs to be a monolithic value with a very interesting type
in some file.

Brandon

 And now, the code for the solution that works.
 Compiler flags:
 -fglasgow-exts -fallow-overlapping-instances -fallow-undecidable-instances

 data Object = Object
 data ClassA = ClassA
 data ClassB = ClassB
 data ClassC = ClassC
 data ClassD = ClassD

 class SubClass super sub | sub -> super where
   upCast :: sub -> super

 instance SubClass (Object,()) ClassA
 instance SubClass (Object,()) ClassB
 -- Multiple inheritance (including the diamond!)
 instance SubClass (ClassA,(ClassB,())) ClassC
 instance SubClass (ClassA,(ClassB,(ClassC,()))) ClassD

 class HasFooMethod cls args result where
   foo ::  cls -> args -> result

 instance (SubClass supers sub,
   HasFooMethod supers args result)
  => HasFooMethod sub args result where
   foo obj args = foo (upCast obj) args

 instance (HasFooMethod cls args result) => HasFooMethod (cls,()) args result
   where
 foo (x,()) = foo x

 instance (HasFooMethod cls args result) => HasFooMethod (x,cls) args result
   where
 foo (x,y) = foo y

 instance HasFooMethod Object Int Int where
   foo _ x = x

 test1::Int = foo Object (1::Int)
 test2::Int = foo ClassA (2::Int)
 test3::Int = foo ClassD (3::Int)

 -- Likewise for another method:

 class HasBarMethod cls args result where
   bar ::  cls -> args -> result

 instance (SubClass supers sub,
   HasBarMethod supers args result)
  => HasBarMethod sub args result where
   bar obj args = bar (upCast obj) args

 instance (HasBarMethod cls args result) => HasBarMethod (cls,()) args result
   where
 bar (x,()) = bar x

 instance (HasBarMethod cls args result) => HasBarMethod (x,cls) args result
   where
 bar (x,y) = bar y

 instance HasBarMethod ClassB Bool Bool where
   bar _ x = x

 test4::Bool = bar ClassB True
 test5::Bool = bar ClassC True
 test6::Bool = bar ClassD True

Re: lexer puzzle

2003-09-25 Thread Brandon Michael Moore
Note I've replied to haskell-cafe. This post is a bit chatty and low on
solid answers.

On Thu, 25 Sep 2003, Sean L. Palmer wrote:

  A... should be split into A.. and .
  I found a compromise: let's make it a lexing error! :-)
  At least that agrees with what some Haskell compilers implement. No
  current Haskell compiler/interpreter agrees with what the report seems
  to say, that is that A... should be lexed as the two tokens A.. and
  ., and similarly, A.where should be lexed as A.wher followed by e.

 Hi.  I'm really new to Haskell, just learning it, and I must say I'm pretty
 overwhelmed by the large variety of constructs. (=>, ->, \ to name a few)

Would that be \ as in TREX row variable polymorphism? Just remember that most
operators are just library functions. It's only =, ->, =>, <-, :: that are
really part of the language, and {,},; for grouping. Did I miss any?


 But I'm just writing this to let you guys know (surely you know this
 already) that anyone from a C/C++/Java/Delphi background is going to
 completely misunderstand the meaning of A.anything in Haskell... it's
 completely nonintuitive to people with my background.  I kinda like dot
 notation because it ties together the symbols visually, for instance
 myrec.myfield is more of a unit than myrec myfield.  It stays together
 better when surrounded by other code, and would result in fewer parentheses
 necessary.

A Python programmer would understand instantly: Python uses exactly the
same syntax for module access, though Python modules are usually in
lowercase. It also seems to be very much in the spirit of an OO language's
"access a member of this object".

Or was that supposed to be composition of a constructor with a function, A
. f? Function composition, and higher order functions in general are
likely to confuse an imperative programmer, but I think there isn't much
syntax can do there.

Or are you talking about the field access syntax? Maybe the problem is
that dot has two to five different meanings: function composition, naming
module members, building hierarchical module names, being a decimal point,
and making ellipses, and it is commonly used for yet another purpose in OO
languages.

 Haskell to me seems to be a great language with a syntax problem, and a bad
 case of too many ways to do the same thing; thus every programmer does
 things their own way and it's difficult to grasp the language by looking at
 various programs, since they're all so very different.  As a small example,
 there's 'let' vs. 'where'.  Maybe a bit of pruning would be in order.

Do you mean the syntax is bad in places? Haskell is the cleanest language
I know of, but I'm sure it has some grungy bits. I've had problems with
unary minus (you can't section binary minus), and precedence issues with
irrefutable patterns and type ascription. I would be happy for any confusing syntax to
be improved. Any good ideas? Syntax change is a possibility: do notation
is a relatively recent addition, and arrow syntax is in the works.

I think you might instead mean the syntax cuts down our market share
because it isn't like common (C derived) languages. I don't think Haskell
could borrow any more syntax from C without actually making the language
worse. It's a problem, but not with the syntax. If someone is so solidly
into a C++/Java/OO mindset that the syntax would be a problem, the
semantics would probably be even more of a problem.

I would suggest Python if Haskell was too much of a jump for someone. It's
still OO, but it encourages more flexible and interesting programs, and
you don't have to live in a Java type system. Plus, it has more libraries,
bindings, and PR, so it's easier to get permission to use it in a company.

If someone is used to Python's layout rule and lack of type signatures,
and gets their head around some of the fun you can have dynamically
picking which members of an object to access, assigning to __dict__ and so
on, then Haskell should be much less of a jump in syntax, and less
imposing in semantics.

 That said, I still think it looks more promising than any other language
 I've looked at that actually is being actively used and maintained and has a
 decent installed base and good cross platform support.  So I will learn it.
 I just wish the transition was easier and that it took less time to learn.
 ;)

 Sean

I learned Haskell from the gentle introduction. It seemed gentle enough
to me, but others disagree, so I'm probably not the best source of advice for the
raw beginner. If you are interested in learning about monads, though,
Jeff Newbern's monad tutorial seems accessible and as complete as anything
this side of Phil Wadler's paper.

I hope learning Haskell goes well.

Brandon

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


Modeling multiple inheritance

2003-09-24 Thread Brandon Michael Moore
I'm trying to build a nicer interface over the one generated by
jvm-bridge. I'm using fancy type classes to remove the need to mangle
method names. I would like methods to be automatically inherited,
following an inheritance hierarchy defined with another set of type
classes.

My basic classes look like this
class HasFooMethod cls args result | cls args -> result where
  foo :: cls -> args -> result

If I have classes A and B with foo methods like
  foo_JA_Jint :: ClassA -> Jint -> Bool
  foo_JB_Jboolean :: ClassB -> Bool -> Jint
then I can make instances
  instance HasFooMethod ClassA Jint Bool
  instance HasFooMethod ClassB Bool Jint

Now I can just use foo everywhere. I would like to avoid declaring an
instance for every class though. In Java, methods are inherited from a
superclass, and I would like to inherit methods automatically as well. In
the bindings jvm-bridge generates, a method is invoked with a function
mangled after the highest ancestor that defined that particular
overloading, so the implementation of HasFooMethod at a particular
overloading is the same for any descendant.

So I defined a class to model the inheritance relationships

class SubType super sub | sub -> super where
  upCast :: sub -> super

Now I can define a default instance of HasFooMethod:
instance (HasFooMethod super args result,
  SubClass super sub) =>
 HasFooMethod sub args result where
  foo sub args = foo (upCast sub) args

This will propagate foo methods down the inheritance hierarchy. If a new
class C is derived from A, I just need to say

instance SubClass ClassA ClassC

and ClassC gets a foo method. (In the actual code I piggy-back on a
transitive subclass relation jvm-bridge defines that already includes an
upcast method, so upCast has a default that should always be acceptable).

The problem comes when interfaces are added to the mix. Interfaces are
treated just like classes by jvm-bridge, and even though no implementation
is inherited from interfaces in Java, the method accessors generated by
jvm-bridge should be inherited.

One problem is that the subclass relationship needs the functional
dependency so that the default instance of HasFooMethod will respect the
functional dependencies of HasFooMethod, so I can't declare subclass
instances for multiple inheritance. On the other hand, if I don't use the
functional dependency on HasFooMethod I end up needing to annotate most of
the return values in a program. I run into similar problems trying to use
numeric literals as arguments, because they are also overloaded.

Does anyone know of clever solutions that would model multiple inheritance
while preserving the functional dependencies (unsafe compiler flags are
fine too), or ways to reduce the pain of overloading resolution without
the functional dependency?

One alternative is generating separate HasFooMethod instances for every
class in the system. The problem is that this would require altering the
bit of jvm-bridge that uses JNI to find information on classes, which
currently only reports newly defined methods. JNI is black magic to me.

Thanks
Brandon

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Modeling multiple inheritance

2003-09-24 Thread Brandon Michael Moore
On Thu, 25 Sep 2003 [EMAIL PROTECTED] wrote:

 On 25/09/2003, at 7:22 AM, Brandon Michael Moore wrote:

  I'm trying to build a nicer interface over the one generated by
  jvm-bridge. I'm using fancy type classes to remove the need to mangle
  method names. I would like methods to be automatcially inherited,
  following an inheritance hierarcy defined with another set of type
  classes.
 ...

 Hi Brandon, it looks like the way that you're modelling inheritance and
 OO-style overloading is basically the same way that I did in my thesis:

  http://www.algorithm.com.au/mocha

 The actual implementation of the thesis will be up in CVS in ~24 hours,
 I'm just waiting from an email back from the people I'm getting it
 hosted with.

 If you want a quick run-down on how I did the OO-style overloading
 without delving into the paper, let me know and I'll post a quick
 summary.  I've only skimmed your email, but I think that the problem
 you're having with interfaces is solved with the way I'm modelling OO
 overloading and class inheritance.

Thanks. I think I could use the summary. I already found and skimmed your
thesis, and I don't think it gives me exactly what I want. All you do in
chapter 3 is represent a multiple inheritance hierarchy. I want default
instances that will propagate method definitions along the hierarchy. I'm
not sure that's possible though.

I want something like this:

data Object
data ClassA
data ClassB
data ClassC

class SubClass super sub ???

instance SubClass Object ClassA
instance SubClass Object ClassB
instance SubClass ClassA ClassC
instance SubClass ClassB ClassC

class HasFooMethod cls args result  ??
  foo :: cls -> args -> result
instance (SubClass super sub, HasFooMethod super args result, ???)
 => HasFooMethod sub args result where
  foo obj args = foo (upCast obj) args

instance HasFooMethod Object int int where
  foo _ x = x
(now all four classes have a foo method)

Brandon

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Polymorphic Recursion / Rank-2 Confusion

2003-09-21 Thread Brandon Michael Moore

On Sun, 21 Sep 2003, Dominic Steinitz wrote:


 Brandon,

 I get the error below without the type signature. My confusion was thinking
 I needed rank-2 types. In fact I only need polymorphic recursion. Ross
 Paterson's suggestion fixes the problem. I stole Even and Odd from Chris
 Okasaki's paper on square matrices. They relate to the fast exponentiation
 algorithm. There's something to be said for Zero and One; as you say they
 give the structure in binary.

I would guess you knew you needed a forall, but missed the implicit one
out front. I'm glad you got this working. I'm surprised this didn't
typecheck though. I usually put signatures on top level functions, so I
suppose my intuition is more tuned to types that can be checked rather
than types that can be inferred. Can anyone tell me what's wrong with the
following derivation?

join :: (a -> [c]) -> (b -> [c]) -> ((a,b) -> [c])
join f g (x,y) = f x ++ g y

collect_ colv colw (Zero v) = colv v
collect_ colv colw (Odd v) = collect_ colv (join colw colw) v
collect_ colv colw (Even v) = collect_ (join colv colw) (join colw colw) v

for the first equation, name the type of collect_
collect_ :: t
name the arguments and unify collect_ with the argument types
colv :: a
colw :: b
(Zero v) :: Vector v w
t = a -> b -> Vector v w -> d
The type of v follows from the data type definition
v :: v
The body is an application (colv v), so we get
a = c -> d, c = v

so far we have
collect :: (v -> d) -> b -> Vector v w -> d
which uses up all the constraints.

In the next equation v has type
v :: Vector v (w,w)
b = (e -> [f]) (from the type of join)
for the recursive call to collect, we get constraints like
d[(w,w)/w] = d
(e -> [f])[(w,w)/w] = ((e,e) -> [f])
We can deduce that w is not in the free variables of d or f,
and that e = w.

Now we have the type
collect :: (v -> d) -> (w -> [f]) -> Vector v w -> d

In the last equation the use of join lets us deduce
that d = [f], giving a final type
collect :: (v -> [a]) -> (w -> [a]) -> Vector v w -> [a].

What did I do wrong here? Probably the constraints between unification
variables. Is there a problem with that sort of reasoning? I think I'm
probably expecting some sort of implicit quantification that I haven't
really specified.
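
For reference, I'm guessing the data type behind the equations above looks
something like the following (Vector_ is my reconstruction, and join is
repeated so the block stands alone). With the explicit signature the
recursive calls use collect_ at larger types, i.e. polymorphic recursion,
and it typechecks:

data Vector_ v w
  = Zero v
  | Odd  (Vector_ v (w, w))
  | Even (Vector_ (v, w) (w, w))

join :: (a -> [c]) -> (b -> [c]) -> ((a, b) -> [c])
join f g (x, y) = f x ++ g y

collect_ :: (v -> [c]) -> (w -> [c]) -> Vector_ v w -> [c]
collect_ colv _    (Zero v) = colv v
collect_ colv colw (Odd v)  = collect_ colv (join colw colw) v
collect_ colv colw (Even v) = collect_ (join colv colw) (join colw colw) v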

 My motivation in using this type was to see if, for example, I could
 restrict addition of a vector to another vector to vectors of the same
 length. This would be helpful in the crypto library where I end up having to
 either define new length Words all the time or using lists and losing the
 capability of ensuring I am manipulating lists of the same length.

I've vaguely heard about something called Cryptol that Galois Connections
wrote, that compiles to Haskell. I don't know about the licensing status
though.

Brandon

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: Polymorphic Recursion / Rank-2 Confusion

2003-09-20 Thread Brandon Michael Moore
On Sat, 20 Sep 2003, Ross Paterson wrote:

 On Sat, Sep 20, 2003 at 12:01:32PM +0100, Dominic Steinitz wrote:
  Can anyone tell me why the following doesn't work (and what I have to do to
  fix it)? I thought by specifying the type of coalw as rank-2 would allow it
  to be used both at a and (a,b).

 Change the signature to

   coal_ :: (v -> [a]) -> (w -> [a]) -> Vector_ v w -> [a]

 Then you can define

   type Vector = Vector_ ()

   coal :: Vector a -> [a]
   coal = coal_ (const []) (:[])
 ___
 Haskell mailing list
 [EMAIL PROTECTED]
 http://www.haskell.org/mailman/listinfo/haskell



___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: Polymorphic Recursion / Rank-2 Confusion

2003-09-20 Thread Brandon Michael Moore
Sorry about the empty message. Send /= Cancel

 Can anyone tell me why the following doesn't work (and what I have to do to
 fix it)? I thought by specifying the type of coalw as rank-2 would allow it
 to be used both at a and (a,b).

Frank explained why the type you gave wouldn't work. I would like to add
that your function would type check without the type signature. This
suggests something here is actively confusing. Do you have any idea what
caused this problem?

I hope to help teach Haskell to first year students, so I'm trying to
figure out what parts of the language are hard to get, and how to usefully
explain things. Anything in pure H98 that trips up an experienced
programmer is worth some attention.

Completely unrelated, I think Zero and One would be better names than Even
and Odd because then the string of constructors writes the size of the
vector in binary, LSB first. I can't see any mnemonic value to Even and
Odd. How do you interpret Even and Odd?

Thanks

Brandon

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: How to detect finite/infinite lists?

2003-09-18 Thread Brandon Michael Moore
On Thu, 18 Sep 2003, Juanma Barranquero wrote:

 Extremely-newbie questions:

 Is there any way to know if a list is finite or infinite, other than
 doing:

   length l

 and waiting forever? :)

 I ask because I was learning Haskell by writing some pretty naive
 implementation of surreal numbers, where I used lists for left and right
 surreal sets, and I wanted to do some operations on the sets (like
 finding the maximum/minimum), but obviously just on finite sets.

 I vaguely suspect the answer is going to be: No, because lists are lazy
 (at least when they are :) and there's no general way to know beforehand
 how many elements they're going to have. But still, if I write

  x = [1..5]

 the compiler knows pretty well x is not going to grow any new member...

Well, it's easy to tell that a list is finite by running length and having
it terminate. This is obviously equivalent to the halting problem so you
can't really expect anything better in general. Why do you need to test
whether lists are infinite? If your lists are being generated from finite
descriptions maybe you could use a data structure that records the
descriptions.

For example, rather than defining
l = [1..]++[2,3..7]
you could define
data EnumList = EnumFromBy Integer Integer
  | EnumFromToBy Integer Integer Integer
  | Append EnumList EnumList
l = EnumFromBy 1 1 `Append` EnumFromToBy 2 1 7

then to test whether a list is infinite you can write
infinite (EnumFromBy _ _) = True
infinite (EnumFromToBy a delta b) = compare a delta == compare a b
infinite (Append l1 l2) = infinite l1 || infinite l2

Of course this only works if it is computable whether a description gives
a finite list.
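
Reading the constructors as from/step/to (which is how the example values
above use them), the same description can also be interpreted back into an
ordinary lazy list, something like:

toList :: EnumList -> [Integer]
toList (EnumFromBy a d)     = [a, a + d ..]
toList (EnumFromToBy a d b) = [a, a + d .. b]
toList (Append l1 l2)       = toList l1 ++ toList l2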


 (Unrelated) Is there any standard function to do:

  interleave [] _  = []
  interleave _  [] = []
  interleave (x:xs) (y:ys) = x : y : interleave xs ys

 It's pretty easy to implement as shown or via zipWith, but given that
 Haskell 98 already includes some basic functions (like cycle, iterate,
 etc.) I wonder if I've missed this one somehow.

 Thanks,

Not as far as I know: neither the standard List module nor Data.List
defines anything like that. What do you want to use it for? If you are
looking for strict alternation you should use this definition. If you want
a nondeterministic merge as elements are
computed, look at mergeIO in Control.Concurrent in the GHC libraries.

Brandon

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Syntax extensions: mdo and do...rec

2003-09-17 Thread Brandon Michael Moore
On Thu, 18 Sep 2003, Ross Paterson wrote:

 The arguments being made here can all be found in the recursive monad
 bindings papers and Levent's thesis.

I don't remember anything about finding smaller binding groups in the
mdo paper. I don't think I've read Levent's thesis.

 On Wed, Sep 17, 2003 at 11:41:24AM -0700, Brandon Michael Moore wrote:
  In any case, I don't see the need for explicit rec groups. Can't GHC just
  find the strongly connected components like it already does with let
  bindings?

 That's what GHC and Hugs do now for mdo (actually segments rather than
 components, because actions can't be rearranged).

  Don't the laws for loop and mfix justify the transformation?

 The loop axioms do, but Levent didn't assume right tightening, which
 corresponds to moving bindings down from a rec, because monads like
 exceptions don't satisfy it.  The same would go for a loop defined on
 an exception arrow.  And that's the biggest problem with implicit
 segmentation: you need to understand what it does to work out the
 meaning of your program.  Again there's an example in those papers
 and Levent's thesis.

I expected any problems would be like that. I remember hearing about a
fixpoint operator for the continuation monad that satisfied all the laws
but right tightening. Well, this would fall under "If it really turns out
to be frequently necessary."
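
For anyone following along, this is roughly the flavour of recursive binding
group being discussed (Node is just an invented example type; the pragma
spelling below is the current one, with -fglasgow-exts the bare mdo keyword
is enough). Only the a and b bindings form a recursive segment; the putStrLn
does not:

{-# LANGUAGE RecursiveDo #-}
import Data.IORef

data Node = Node Int (IORef Node)

twoNodeCycle :: IO Node
twoNodeCycle = mdo
  putStrLn "tying the knot"      -- not part of the recursive segment
  a <- Node 1 <$> newIORef b     -- a and b refer to each other, so these
  b <- Node 2 <$> newIORef a     -- two bindings go through mfix together
  return a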

 BTW, in GHC 6.2 with the -fglasgow-exts -farrows flags, you will be able
 to use either mdo or do...rec for monads and for arrows, as an experiment.
 (Maybe rec wasn't such a great keyword to take from the identifier
 space.)

When can we expect 6.2?

Brandon

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: Syntax Extensions (and future Haskell)

2003-09-17 Thread Brandon Michael Moore

On Thu, 18 Sep 2003 [EMAIL PROTECTED] wrote:

 Although a number of comments in this discussion make some sense,
 I personally am getting worried about the direction that it is taking.
 I have been a (fairly quiet) Haskell user for some time. I like it
 because of the strong connection to standard mathematical constructs,
 and the dedication to equational reasoning. I was dubious when Monads
 were introduced, but grew to be comfortable with that as an approach
 to embedding temporal characteristics, I try not to use do, I like the
 fact that I don't have to. But, recent developments on this list suggest
 that this is all going to be a thing of the past. There is a real danger
 that Haskell will just turn into yet another procedural language.

What worries you? I don't see what configuration has to do with procedural
languages. The elegant solution everyone is looking for would make it
easier for the language to evolve, but I don't see why it would become
more procedural. If anything most of the current extensions move Haskell
even farther from normal procedural languages. I'm curious what procedural
tendencies you see. I agree monads are becoming very popular, but I don't
equate monads and imperative languages. Maybe IO and ST are imperative.

 Firstly I want to bring into the open something that you would all be
 aware of. It is possible to change any typical procedural imperative
 language into a functional language by a change of point of view.
 All you do is just state that each command in the imperative is a
 function that operates on the state of the machine. The entire program
 becomes a composition of these functions. Control statements such as
 for-do-while-until-repeat-if, simply become higher order functions.
 That's why the monad stuff works, all it does is demand that you
 explicitly admit to the passing of the state.

 If you keep going the way you are going you will simply turn
 Haskell into a procedural language with some snazzy data-types.
 It would survive, perhaps, but in name only.

 The importance of Haskell is not that it is Functional, this is
 in reality just a technicality of point of view, the importance
 of Haskell is the WAY in which it is functional, the emphasis
 it puts on the manner in which the total function is decomposed.
 It is important not simply that Haskell admits equational reasoning,
 but that it is equational reasoning that a human can do, not something
 that requires a heavy duty code transformer to work out.

 At least this is how I see it, perhaps this is simply the
 point at which Haskell and I part company?

 I also don't see the point of the language configuration pragmas
 either. Uniformity is important. Instead of agreeing to disagree,
 and coming up with a rag-bag language, the points need to be nutted
 out until they make consistent sense. The pragmas do not represent
 any uniformity, they actually represent a schism, you can't agree
 so you are splitting the language into incompatible variants. It
 solve the political problem of in-fighting, but only by letting
 the in-fighting destroy the language. It's the same sort of thing
 that splits C++ into multiple camps, the same thing that gives the
 multiple levels of comments in comments for postscript, and so on.

I agree with Henrik here. I see two main purposes to configuration:
backwards compatibility for continuing support of the standard and of
legacy code, and the freedom to experiment with the language without
committing to extensions. I see multiparameter type classes and explicit
quantification as part of the current language, and expect others feel the
same way. I expect implicit parameters to remain experimental until we
really understand them, for example.

 Remember Flon's law. The fact that you CAN do something does not
 mean that it is a good idea.

 I'll get off my soap box now.

If it gives you a useful perspective, and gets your insights back to the
rest of us, stay up there for a while. We can always use good ideas.

Brandon

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: Question about implementation of an information-passnig arrow

2003-09-16 Thread Brandon Michael Moore


On Mon, 15 Sep 2003, Yu Di wrote:

 data MyArrow a b = MyArrow ((String, a) -> (String, b))

 i.e. there is an information asscioated with each
 piece of data (represented by the string), and I want
 to pass it around. And often the arrow's processing
 logic will depend on the input information, therefore
 a monad-style

 data MyArrow a b = MyArrow (a -> (String, b))

 will not work.

 Now I have a problem with the definition of pure and
 first. At first, I declared

 pure f = MyArrow (\(s, x) -> (s, f x))
 first (MyArrow f) = MyArrow (\(s, (x, y)) -> let (s', z) = f (s, x) in (s', (z, y)))

 this seems to work, but then I begin to have problems
 with the data-plumbing pure arrows, e.g. in

 pure (\x -> (x, x)) >>> first someArrow >>> pure (\(_, x) -> x)

 Ideally, this arrow will preserve whatever information
 I put there for the input, but because first
 someArrow will change the WHOLE information
 associated with the pair of result, I can't find any
 way to let pure (\(_, x) -> x) (which is an extremely
 generic function) retrieve the PART of information for
 the second piece in the pair tuple.

 I thought about this a lot, but it seems to me that
 the only way to solve this is to somehow make the
 information lookupable from the data itself, not
 placed beside the data, but how I can do that? And
 can there be some other solution to this?

 Thanks very much!

 Di, Yu

What are you trying to do here? From the type of the arrow you are trying
to define it looks like pure functions on (String,a) pairs should work
just as well. Whatever you are doing, I would guess that the tagging is
fairly orthogonal to the use of arrows. I think you can probably get by
with pure functions on pairs of strings and values, with a few lifting
combinators.

For example, if I assume that the tags are used so functions can add
comments on the values as they pass through you might write something like

type Annotated a = (String,a)
comment :: String -> Annotated a -> Annotated a
comment newcomment (comments,val) = (newcomment++comments,val)

liftAnn2 f (c1,a) (c2,b) = (c1++c2, f a b)
plusAnn x y = comment "Added two numbers" (liftAnn2 (+) x y)

Now you can define functions like
addThree x y z = plusAnn x (plusAnn y z)
and run computations like
addThree ("One",1) ("Two",2) ("Three",3)
and get results like
("Added two numbersOneAdded two numbersTwoThree",6)

Obviously the policy for combining tags is pretty bad, but you could fill
in whatever you wanted.

I must say I'm pretty dubious though. It sounds like you intend to compute
over these tagged values and combine them. The only sensible use of string
tags on values passed freely around that I can think of is building up a
representation of the computation that produced the value, like the R.hs
module by Claus Reinke. To do that one tag per arrow is exactly what you
want. For most other uses I think the tag should probably be a data
structure rather than a string. If the values are sitting in a data
structure (like an association list) it's a different story, of course.
I could help more if I had a better idea what your purpose is.

Brandon

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


RE: Circular Instance Declarations

2003-09-14 Thread Brandon Michael Moore


On Thu, 11 Sep 2003, Simon Peyton-Jones wrote:

 OK, I yield!

 The HEAD now runs this program.  It turned out to be a case of
 interchanging two lines of code, which is the kind of fix I like.

 Simon

Cool! Yet another domain where haskell handles infinities quite happily.
Thanks.

Hopefully I'll have some code to contribute soon.

  Brandon



 | -Original Message-
 | From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
 Behalf Of Ashley Yakeley
 | Sent: 07 September 2003 06:57
 | To: [EMAIL PROTECTED]
 | Subject: Circular Instance Declarations
 |
 | When -fallow-undecidable-instances is switched on, is there any reason
 | why circular instances are forbidden? For instance:
 |
 |  module CircularInsts where
 | {
 | data D r = ZeroD | SuccD (r (D r));
 |
 | instance (Eq (r (D r))) => Eq (D r) where
 | {
 | ZeroD == ZeroD = True;
 | (SuccD a) == (SuccD b) = a == b;
 | _ == _ = False;
 | };
 |
 | newtype C a = MkC a deriving Eq;
 |
 | equalDC :: D C -> D C -> Bool;
 | equalDC = (==);
 | }
 |
 | When I compile this, I get this:
 |
 |  $ ghc -fglasgow-exts -fallow-undecidable-instances -c
 CircularInsts.hs
 |  CircularInsts.hs:2:
 | Context reduction stack overflow; size = 21
 | Use -fcontext-stack20 to increase stack size to (e.g.) 20
 | `Eq (C (D C))' arising from use of `==' at CircularInsts.hs:16
 | `Eq (D C)' arising from use of `==' at CircularInsts.hs:16
 | `Eq (C (D C))' arising from use of `==' at CircularInsts.hs:16
 | `Eq (D C)' arising from use of `==' at CircularInsts.hs:16
 |
 | Would it be reasonable for the compiler to check back through the
 stack
 | and allow the circularity? It will just create an ordinary recursive
 | function.
 |
 | --
 | Ashley Yakeley, Seattle WA
 |
 | ___
 | Haskell mailing list
 | [EMAIL PROTECTED]
 | http://www.haskell.org/mailman/listinfo/haskell


 ___
 Haskell mailing list
 [EMAIL PROTECTED]
 http://www.haskell.org/mailman/listinfo/haskell




___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: An IO Question from a Newbie

2003-09-14 Thread Brandon Michael Moore


On Sun, 14 Sep 2003, Glynn Clements wrote:


 Brandon Michael Moore wrote:

  Hal was pretty terse, so I'll explain why switching to putStrLn will help.
 
  stdout is line buffered.
 
  At least by default (see hSetBuffering). That means output will only be
  flushed to the screen once a newline is written. Your prompt wasn't
  printed because it didn't have a newline, so it was buffered until the
  second print provided one (read from the user, by way of s).
 
  This is hardly specific to Haskell. Try this C program:

 But there's one significant difference between C and Haskell, which is
 applicable in the case of Matt's program. In C, any line-buffered
 output streams are automatically flushed when a read from an
 unbuffered or line-buffered stream can't be satisfied from its buffer.

Interesting. I didn't know this. Maybe we should match this behaviour, or
provide a write-string-and-flush function. It seems like this issue
is causing an undue amount of trouble.
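
Something like the following prompt helper is probably all that's needed
(hypothetical name; the putStr plus hFlush is the write-and-flush part):

import System.IO

prompt :: String -> IO String
prompt msg = do
  putStr msg
  hFlush stdout   -- force line-buffered stdout to show the prompt now
  getLine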

 Also, it seemed fairly clear from Matt's original message that:

 a) he didn't want to have to force a new-line (he noted that doing so
 eliminated the problem), and

I should note here that there is a gnu readline binding distributed with
GHC. It's undocumented, but it seems to follow the C API pretty closely,
and you can make a decent interface using only two of the functions.

 b) he understood the concept of flushing, but presumably didn't know
 how to do it in Haskell.

 While we're on the subject, I'll point out a couple of other
 differences between the buffering in ANSI C's stdio library and
 Haskell's:

 1. In Haskell, you can change the buffering mode at any point; in C,
 you have to change it before any data is read from or written to the
 stream, otherwise the behaviour is undefined.

 2. For an input stream which is associated with a tty, changing the
 buffering mode may also change the terminal settings (setting it to
 unbuffered disables canonical mode while setting it to line-buffered
 or fully-buffered enables it).

 --
 Glynn Clements [EMAIL PROTECTED]
 ___
 Haskell-Cafe mailing list
 [EMAIL PROTECTED]
 http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Circular Instance Declarations

2003-09-10 Thread Brandon Michael Moore

On Sun, 7 Sep 2003, Ashley Yakeley wrote:

 In article [EMAIL PROTECTED],
  Brandon Michael Moore [EMAIL PROTECTED] wrote:

  Detecting circularity in a derivation is equivalent to accepting a regular
  infinite derivation for instances. Would you have a use for irregular
  derivations?

 Could you give me an example?

I should have asked whether you needed irregular types, and undecidable
instances for irregular types.

I'm close to a proof that will justify more permissive instances for
regular types (plus a bit), but I haven't made much progress on irregular
types. I'm wondering if anyone actually uses them, let alone
fancy instances for them. Also, if I tried to expand my approach to
irregular types it would require generating dictionaries at runtime, rather
than just defining dictionaries recursively.

In case the word irregular is the problem I'll give my definition, and how
I'm applying it to types. The definition is from Pierce, in Types and
Programming Languages.

An irregular tree is a tree with an infinite number of distinct subtrees.

When I say a type is irregular I mean the infinite tree you get when you
(recursively) expand all the applications of type constructors is
irregular.

A simple irregular type is
Irr a = Con a (Irr (F a))
(as long as F uses a)

This expands to something like a|F a|F (F a)| ..., where t|t..t
denotes a sum type. Each right child is like the parent with an extra F
everywhere, so the tree is irregular.

The sort of instance I'm interested in is something like
instance (Eq a, Eq (Irr (F a))) => Eq (Irr a)
where the context only mentions (subexpressions of) type expressions
encountered while expanding the type.
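
Concretely, instantiating F with [] just for illustration:

{-# LANGUAGE FlexibleContexts, UndecidableInstances #-}

data Irr a = Con a (Irr [a])

-- the context only mentions types reached while expanding Irr a
instance (Eq a, Eq (Irr [a])) => Eq (Irr a) where
  Con x xs == Con y ys = x == y && xs == ys

-- The declaration is accepted, but solving, say, Eq (Irr Int) needs the
-- infinite family Eq (Irr [Int]), Eq (Irr [[Int]]), ...; no goal ever
-- repeats, so a regular (cyclic) derivation doesn't help here.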

Are you using anything like this?

Brandon


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Type Class Problem

2003-09-10 Thread Brandon Michael Moore
Hello everyone

I think I'm close to useful results on the instance restrictions.

First there's an obvious extension to the Haskell98 rule. The H98 rule
says the instance head must be a type constructor applied to type
variables, and the context must mention only those type variables. This
gives a termination proof by counting constructors. If the rule is
weakened to allow an arbitrary type expression in the head and require
that the context mention only strict subexpressions of the head, the same
proof still applies. I'm not sure how useful this is, but we might as well
allow it.

Second, I have half a result in the direction of allowing the context to
mention types that arise from applying type constructors used in the
instance head.

This requires accepting regular derivations, which means the compiler
would need to track all previous goals while deriving an instance, and
handle a second occurrence of a goal by producing a reference to the
dictionary for the first occurrence (which may not be constructed yet),
rather than blindly continuing the derivation.

First I will explain the proof method. It's related to structural
induction, but not quite the same. Suppose we have a subexpression
relation on type expressions such that every type expression has only a
finite number of subexpressions. If instance contexts only mention
subexpressions of the head, then searching for an instance for a type can
only generate #of subexpressions*#of classes distinct goals.  Therefore,
in finite time either the derivation will fail, or we will produce a
regular derivation. Alternately, we only try to derive an instance the
first time it arises as a goal, so each time we apply an instance rule
there is one less goal in the pool of possible goals, which must
eventually be exhausted.

The syntactic subexpression relation obviously has these properties, but
it's often useful to refer to types that show up when we apply type
constructors.

For example, my case and a simplification of Ashley's:
data Mu f = In (f (Mu f))
instance C (f (Mu f)) => C (Mu f)

On the other hand, we can't unfold all type constructors because some
types are irregular, that is, we encounter an infinite number of types while
expanding the type constructors.

Define a kind indexed family of predicates on type constructors, R_K(T),
where the property is true if T::K, T is regular (including expanding the
insides of lambdas), and if K = K1 -> K2, then R_K2(T t) for all t such that
R_K1(t). Say a type is regularity preserving if it satisfies the predicate
corresponding to its kind. Any type expression built entirely from
regularity preserving type constructors will be regular. I think that a
subexpression relation that allows expanding applications
regularity preserving type constructors will give any type a finite number
of subexpressions, but I don't know enough about the structure of
regularity preserving type constructors to prove it.

The missing half here is an algorithm for testing whether a type
constructor is regularity preserving. For this the body of the type
constructor can be simplified to consist of just the type constructor
applications in the body. Apply the type constructor to skolem arguments,
and check whether the resulting tree is regular. I don't know how to do
this.

Another approach is to draw out a dependency graph between type
constructors, with an edge from A to B for each use of B in the definition
of A, labeled with the arguments used. Then the question is whether
starting from our type applied to tyvars we can find some path through the
graph that generates an infinite number of types, where we keep track of
the current node and the current arguments, and modify the arguments as
directed by a label when moving along an edge. I don't know if the search
can even find a tail-repeating path that witnesses the irregularity, let alone
a family of paths that can be tested.

Any assistance here would be appreciated.

Thanks
Brandon


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Request for Instances

2003-09-10 Thread Brandon Michael Moore
Hi everyone.

 I've been looking at the restrictions on instances in the H98 standard
and thinking about alternatives. I would like to have a body of data type
and class/instance declarations so I can test how useful various
extensions would be. Please send or direct me to code that requires
-fallow-undecidable-instances.

Thanks.
  Brandon


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: Circular Instance Declarations

2003-09-10 Thread Brandon Michael Moore
On Wed, 10 Sep 2003, Ashley Yakeley wrote:
  Brandon Michael Moore [EMAIL PROTECTED] wrote:

  A simple irregular type is
  Irr a = Con a (Irr (F a))
  (as long as F uses a)

 Would this be an irregular type, with F as ((->) val)?

   data SymbolExpression sym val a = ClosedSymbolExpression a |
OpenSymbolExpression sym (SymbolExpression sym val (val -> a));

This would be an irregular type. In my proposal an instance declaration
deriving some instance of SymbolExpression sym val a could use the types
sym val and a in the context, but not (val -> a) which would only arise
from unfolding the type constructor. Of course when I say proposal I
mean Would be a proposal if only I could prove that last lemma.

 I used to use this in HScheme for expressions with free variables, as in
 the lambda calculus. For instance, \x.xy has y as a free variable,
 and might be represented as something like this:

   OpenSymbolExpression "y" (ClosedSymbolExpression (\y -> (\x -> x y)))

 It's very clean and safe, and can be made an instance of
 FunctorApplyReturn, but it turned out to be a bit slow. I also tried
 this:

   data ListSymbolExpression sym val a =
  MkListSymbolExpression [sym] ([val] -> a);

   MkListSymbolExpression ["y"] (\[y] -> (\x -> x y))

 This is much simpler, but now one has to make sure that the lists are
 the same size, so to speak. But this one turned out to be the fastest:

   newtype FuncSymbolExpression sym val a =
MkFuncSymbolExpression ((sym -> val) -> a);

   MkFuncSymbolExpression (\f -> (\x -> x (f "y")))

 The downside is that there's no way to find out what the free variables
 are. That's OK for Scheme, however, since Scheme doesn't complain about
 unbound variables until run-time.

 So, um, any excuse to talk about HScheme anyway.

It looks like your scheme puts the type system to good use. I used a value
type with numbers, Val -> Val functions, and some other stuff. I gave up
when I realized I needed to thread references through everything to
implement R5RS. I suppose everyone has started a Scheme in Haskell
sometime.

Brandon


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: Circular Instance Declarations

2003-09-07 Thread Brandon Michael Moore
Hi Ashley

  See the thread Type Class Problem. In his post on Aug 22 Simon
Peyton-Jones said that it shouldn't be hard to implement, and mentioned
that it would ruin the property that dictionaries can be evaluated by
call-by-value. I couldn't puzzle out enough of the type class system to
make the change on my first try, and since then I've been looking for a
more general solution

Actually, I'm surprised someone else has a use for this. I wanted
circular instances for playing with the paper Recursion Schemes from
Comonads. What are you trying to do?

Detecting circularity in a derivation is equivalent to accepting a regular
infinite derivation for instances. Would you have a use for irregular
derivations?

Brandon

On Sat, 6 Sep 2003, Ashley Yakeley wrote:

 When -fallow-undecidable-instances is switched on, is there any reason
 why circular instances are forbidden? For instance:

  module CircularInsts where
 {
 data D r = ZeroD | SuccD (r (D r));

 instance (Eq (r (D r))) => Eq (D r) where
 {
 ZeroD == ZeroD = True;
 (SuccD a) == (SuccD b) = a == b;
 _ == _ = False;
 };

 newtype C a = MkC a deriving Eq;

 equalDC :: D C -> D C -> Bool;
 equalDC = (==);
 }

 When I compile this, I get this:

  $ ghc -fglasgow-exts -fallow-undecidable-instances -c CircularInsts.hs
  CircularInsts.hs:2:
 Context reduction stack overflow; size = 21
 Use -fcontext-stack20 to increase stack size to (e.g.) 20
 `Eq (C (D C))' arising from use of `==' at CircularInsts.hs:16
 `Eq (D C)' arising from use of `==' at CircularInsts.hs:16
 `Eq (C (D C))' arising from use of `==' at CircularInsts.hs:16
 `Eq (D C)' arising from use of `==' at CircularInsts.hs:16

 Would it be reasonable for the compiler to check back through the stack
 and allow the circularity? It will just create an ordinary recursive
 function.

 --
 Ashley Yakeley, Seattle WA

 ___
 Haskell mailing list
 [EMAIL PROTECTED]
 http://www.haskell.org/mailman/listinfo/haskell







___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


RE: Type class problem

2003-08-30 Thread Brandon Michael Moore
On 28 Aug 2003, Carl Witty wrote:

 On Thu, 2003-08-28 at 13:10, Brandon Michael Moore wrote:
  Unfortunately I don't have a useful syntactic condition on instance
  declarations that insures termination of typechecking. If types are
  restricted to products, sums, and explicit recursion, then termination is
  ensured if we restrict the assumptions of a declaration to instances for
  subexpressions of the type we are declaring an instance for (there are
  only a finite number of subexpressions times a finite number of classes,
  and one is added each time we apply a rule). I haven't made any progress
  if type operators are allowed, and I don't have any simple check whether a
  Haskell type expression can be represented without type operators. I
  was hoping to get normalization of type expressions from the simple
  kinding, but recursive operator definitions break that.
Rather, regularity of the resulting types, or something like that.
We can always evaluate a type expression to head normal form, but the
complete expansion of a type can be irregular.

 I think some of David McAllester's papers from about 1990-1994 may be
 relevant here.  He has several papers on deciding when sets of inference
 rules are terminating, or terminating in polynomial time.  (He applies
 this in the context of automated theorem proving, but it should apply
 perfectly well to type class inference as well.)


Thanks, this is interesting work. I've read New Results on Local
Inference Relations, and skimmed a few other papers. Too bad I can't see
how to use it. Determining locality seemed to require a global analysis,
and superficial rules look too restrictive for instance declarations. Some
of the ideas could probably be adapted to prove termination (and bounds)
for sets of rules if the antecedents of rules mention only subterms of
the conclusion. It's pretty trivial to prove that regular terms have
regular derivations if any, but I haven't looked for good bounds.

It looked to me like most of the results assumed that terms were finite,
but most of it should carry over to regular terms. I don't think it would
be easy to extend to irregular terms, even if I had a good
characterization of Haskell types. Does anyone know of any results in that
direction? Simple kinds give you head normalization, but I don't know how
to describe the sorts of terms that end up as constructor arguments as you
evaluate type expressions. I want some reasonable characterization of the
sort of trees you get when you evaluate type expressions completely. Does
anyone know of papers or books on this?

Thanks
Brandon


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


RE: Type class problem

2003-08-28 Thread Brandon Michael Moore

On Fri, 22 Aug 2003, Simon Peyton-Jones wrote:


 Brandon writes

 | An application of Mu should be showable if the functor maps showable
 types
 | to showable types, so the most natural way to define the instance
 seemed
 | to be
 |
 | instance (Show a => Show (f a)) => Show (Mu f) where
 |   show (In x) = show x
 |
 | Of course that constraint didn't work

 Interesting.  This point is coming up more often!  You'll find another
 example of the usefulness of constraints like the one in your instance
 decl in Derivable Type Classes (towards the end).
 http://research.microsoft.com/~simonpj/Papers/derive.htm

 Valery Trifonov has a beautiful paper at the upcoming Haskell workshop
 2003 that shows how to code around the lack of universally quantified
 constraints.  I strongly suggest you take a look at it, but it doesn't
 seem to be online yet.


 | Constraint Stack Overflow:
 |   Observable (N (Mu N))
 |   Observable (Mu N)
 |   Observable (N (Mu N))
 |   Observable (Mu N)
 |   ...
 |
 | I expected that that constraint solver would realize it could
 construct a
 | dictionary transformer with a type like Observer Nat -> Observer Nat
 and
 | take the fixed point to get an instance for Nat. I haven't looked at
 the
 | implementation, but it seems like it would be easy to add the
 constraint
 | we are trying to derive to the list of assumptions when trying to
 | construct all the anteceedents of an instance declaration.

 That's true, I believe, but
 a) it's a bit fragile (a sort of half-way house to solving the halting
 problem)
 b) at the moment dictionaries have the property that you can always
   evaluate them using call-by-value; if they could be recursively
   defined (as you suggest) that would no longer be the case

 Mind you, GHC doesn't currently take advantage of (b), so maybe it
 should be ignored.  Adding the current goal as an axiom would not be
 difficult, but I don't have time to do it today!  Is anyone else
 interested in such a feature?

I would like to try making this change, but I couldn't puzzle out enough
of the type class system the last time I looked. I would appreciate
advice, references, or even just a list of the relevant modules.

With regard to a), taking our goal as an axiom while searching for a
derivation can be expressed in language that sounds less ad-hoc: accept
regular instance declarations.

Unfortunately I don't have a useful syntactic condition on instance
declarations that insures termination of typechecking. If types are
restricted to products, sums, and explicit recursion, then termination is
ensured if we restrict the assumptions of a declaration to instances for
subexpressions of the type we are declaring an instance for (there are
only a finite number of subexpressions times a finite number of classes,
and one is added each time we apply a rule). I haven't made any progress
if type operators are allowed, and I don't have any simple check whether a
Haskell type expression can be represented without type operators. I
was hoping to get normalization of type expressions from the simple
kinding, but recursive operator definitions break that.

On the other hand, allowing implications in a context is more general.
Adding the conclusion of a rule as an axiom while trying to derive the
context is equivalent to modifying every declaration
instance (ct1, ct2, ct3) => goal
to read
instance (goal => ct1, goal => ct2, goal => ct3) => goal.
The methods defined in the instance should still typecheck, if we use the
context and the conclusion of the instance declaration when checking the
method definitions.

It's worth noting that if we have a restriction on the form of instance
declarations that ensures decidability of type checking, then generalizing
the form of instance declarations from
(conclusion, .., conclusion) => instance conclusion
to
(ctx => conclusion, .., ctx => conclusion) => instance conclusion
doesn't break decidability, as long as
1) the instance would still be syntactically valid if we replaced all the
implications in the context with their right hand side
2) all the implications in the context also satisfy the syntactic
validity rule.
Unfortunately the only restriction I know of (the one from the Haskell
Report) isn't very useful even with generalized contexts. On the other
hand, allowing regular derivation widens the space of safe instances, but
doesn't give any guidance toward restrictions that ensure safety.

Allowing implications in contexts even allows us to derive instances for
some irregular types:

data Twice f x = T (f (f x))
data Growing f = G (f (Growing (Twice f)))
data Id x = Id x

Suppose we want to define instances that will imply Show (Growing Id).
Growing Id is an irregular type so allowing irregular derivations isn't
enough, but the following instances are acceptable

instance (forall a. Show a => Show (f a), Show x) => Show (Twice f x) where
  show (T ffx) = "T " ++ show ffx
instance (forall a. Show a => Show (f a)) => Show (Growing f) where
  show (G fgtf) = "G " ++ show fgtf

Re: Debugging

2003-08-28 Thread Brandon Michael Moore
On Tue, 26 Aug 2003, Konrad Hinsen wrote:

 My Haskell experiments have reached a size in which debugging tools would be
 more than welcome, so I looked around, and was very disappointed. I tried
 Hood, which is a pain to use (lots of editing of the code required), I looked
 at Buddha but didn't want to downgrade to GHC 5 for trying it (nor is my code
 Haskell 98, because of multi-parameter classes), and all that seems left to
 try is Hat, whose Web site I can't reach at the moment.

 So what are you Haskell programmers using for debugging in real life?

 Konrad.

What are you trying to debug? I could write something that sounded more
relevant if I knew.

I haven't been doing anything that counts as real life programming, so my
suggestions are probably biased towards trivial programs and low standards
of correctness. In particular, I've never worried about space leaks. My
advice is basically design suggestions and QuickCheck.

Mostly I try to write my programs in small pieces and check those. It's
not exactly debugging, but it makes debugging a lot easier. For trivial
throwaway programs testing by hand in GHCI is enough (It's been enough for
solving some old ACM problems. I love ReadP, and the monad transformers).
QuickCheck gives a much better indication of correctness with much less
manual labor. IIRC I used it when writing a unifier.
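
For anyone who hasn't tried it, the cost really is just writing a property
and calling quickCheck; this isn't from the unifier, just the usual flavour:

import Test.QuickCheck

prop_reverseTwice :: [Int] -> Bool
prop_reverseTwice xs = reverse (reverse xs) == xs

main :: IO ()
main = quickCheck prop_reverseTwice   -- checks it on 100 random lists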

I don't know how to test a GUI with QuickCheck, or custom monads (Browser,
JVM, etc. custom=not in a typeclass, so you can't define a stub), but
other than that I think it's the best thing short of a proof, and it has
the advantage of being machine checked.  Has anyone tried to write a
system that would prove QuickCheck style properties from a function
definition?

The only tool I've used is HOOD. The version online needs a bit of hacking
to work with GHC 6 (you need to resolve catch to Control.Exception.catch
and fix some types). I like the output, and I didn't think the code
changes were too bad. I had some trouble defining Observable instances
when I was coding examples from Recursion Schemes from Comonads, but I
doubt your types are quite as convoluted (think DecoratingStructures at
the type level). Do you want to avoid ANY code changes?

Brandon


___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Type class problem

2003-08-17 Thread Brandon Michael Moore


On Sun, 17 Aug 2003 [EMAIL PROTECTED] wrote:


  I defined type recursion and naturals as

  newtype Mu f = In {unIn :: f (Mu f)}
  data N f = S f | Z
  type Nat = Mu N

  An application of Mu should be showable if the functor maps showable types
  to showable types, so the most natural way to define the instance seemed
  to be

  instance (Show a => Show (f a)) => Show (Mu f) where
show (In x) = show x

  Of course that constraint didn't work.

 Well, it can if we write it in a bit different way:

 instance (Show (f (Mu f))) => Show (Mu f) where
show (In x) = show x

 instance Show (N (Mu N)) where
    show Z = "Z"
    show (S k) = "S " ++ show k

 *Main> show (In (S (In (S (In Z)))))
 "S S Z"

 This solution is akin to that of breaking the type recursion when
 defining the fixpoint combinator fix. Only here we face the recursion
 on constraints. I believe the same solution should work for the
 Observable class. You didn't post the definition of the Observable
 class, so I couldn't test my claim.

 Flags used:
   ghci -fglasgow-exts -fallow-undecidable-instances < /tmp/a.hs

Thanks for this solution.

You can get HOOD from the libraries page on haskell.org. It (Observe.lhs)
defines observable.

I still want the instance (Show a) => Show (N a) for showing all the
intermediate values that you get with the recursion combinators, so I
think I'll need to add -fallow-overlapping-instances.

I still think it would be useful to add a goal as an axiom while trying to
prove the antecedents of any derivation rule that applies. Equivalently,
you could consider it accepting regular derivations rather than just
finite derivations.

I think allowing regular derivations might make it possible to find more
liberal constraints on the form of instances that would still insure the
decidability of solving for instances.

If the form of types is restricted to explicit recursion, variants,
and tuples:

T := mu X.T | T1|T2|..|Tn | (T1,T2,..,Tn)

Then deriving an instance is decidable so long as the context of an
instance mentions only subexpressions of the head. (because there are only
a finite number of distinct subexpressions, and each time we use a rule we
add one to our set of axioms)

Of course it breaks down if you allow type operators...

Are there any papers I should read if I want to find something useful in
this vein? I just finished grabbing everything relevant in the first three
pages or so of googling for type classes. The only book on type theory I
have is Pierce's Types and Programming Languages, and I have nothing on
logic. I have a vague idea coinduction might be useful, and an even hazier
idea that we might be able to get away with some non-regular derivation
trees.

Interesting? Useful? Should go to haskell-cafe?

Thanks for any advice

Brandon


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: Calling Haskell from Java

2003-08-14 Thread Brandon Michael Moore
Is it fine if the interface uses JNI? The jvm-bridge is an excellent tool
if you can use JNI, but I don't know of anything that compiles Haskell to
java bytecode. There was a post a few years ago about an experimental Java
backend for GHC, but I haven't heard anything since, and the -J switch
doesn't do anything in a recent GHC. The Mandarin people had a version of
GHC back when they were targeting Java, but they've moved to .NET. Does
anyone know of a project (or a CVS tag) for something that can compile
Haskell to java bytecode?

The jvm-bridge project includes tools for generating a Haskell interface
to a java class, another for generating a monad that wraps the JVM
initialization your program needs, using typeclasses to model the class
hierarchy and convert parameters. There is a function that will
dynamically define a class with Haskell methods.

I don't know how much support jvm-bridge provides if you want to define a
class in Haskell and package it so you can use it from a normal java
program. You would need to declare a class in java with native methods,
and compile the haskell into a suitable library providing the native
implementation. I don't remember any tools for doing this.

I might have simply forgotten or overlooked a nice interface, or you might
need to write the JNI code. If you are determined to go this way you could
at least use the JNI binding JVM provides. Rather than doing that, it's
probably simpler for your program to start in Haskell, even if all
it does is define a class and invoke your main class (passing a factory
object).

I assume your Haskell will need to call Java at some point, if only to
unpack a collection, so I'll pass along two things that confused me for a
while. One thing to remember is that methods are defined in the class
module for the first class that defined them. If you want to use a method
that a class inherited from an ancestor you need to create and import the
class module for that ancestor. The method will have a name like
method_ancestor_args, but it calls the correct overridden method on
whatever object you pass in as this (the first argument). The other thing (this
is more a Haskell issue) is that if you are writing a GUI program the main
(Haskell) thread has to survive until the program is supposed to end, and
it needs to be inside the let java threads run combinator. (sorry, I
forgot the name and I'm away from my home machine).

What are you trying to do? I'm thinking about porting a web testing
application from python to haskell for parser combinators and monads (I'm
using objects with a run method to control execution and thread through
some state), but I don't know of any alternative to the HttpUnit library
for testing webpages with javascript. I just need to call a bit of java in
the middle of a Haskell program

Tell us how your project works out.

Brandon

On 12 Aug 2003, Immanuel Litzroth wrote:

 Calling Haskell from java was supposed to be supported by a tool
 called lambada, but all I can seen to find of that on the net is a
 paper. Does anyone have any pointers to more information/implementation.
 I specifically want to call Java-Haskell and not the other way around.
 thanks in advance
 Immanuel
 ***
 It makes me uncomfortable to see
 An English spinster of the middle class
 Describe the amorous effects of `brass',
 Reveal so frankly and with such sobriety
 The economic basis of society.
 W.H. Auden

 --
 Immanuel Litzroth
 Software Development Engineer
 Enfocus Software
 Kleindokkaai 3-5
 B-9000 Gent
 Belgium
 Voice: +32 9 269 23 90
 Fax : +32 9 269 16 91
 Email: [EMAIL PROTECTED]
 web : www.enfocus.be
 ***

 ___
 Haskell mailing list
 [EMAIL PROTECTED]
 http://www.haskell.org/mailman/listinfo/haskell




___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


RE: Help with Exceptions on I/O

2003-08-14 Thread Brandon Michael Moore
You don't really need to change the buffering mode. stdout is line
buffered by default, so you just need to make sure a newline is printed
after your message. putStrLn adds a newline after the string it prints, or
you could use \n in the string literal. Try this:

 main = do
  --lots of code goes here,
  --with a catch handler if you want it
  putStrLn "Press ENTER to exit" -- with Ln
  getLine
  return ()

On Tue, 12 Aug 2003, Hal Daume wrote:

 you can write this a bit more simply as:

 main = do
   (do do-the-major stuff here
    putStr "File created...")
  `catch` (\_ -> show the error)
    getLine   -- look ma, no <-
   return ()

 now, your problem is almost certainly with buffering.  in the main do,
 put

   hSetBuffering stdout NoBuffering
   hSetBuffering stdin  NoBuffering

 you'll need to import System.IO to get these.

  --
  Hal Daume III   | [EMAIL PROTECTED]
  Arrest this man, he talks in maths.   | www.isi.edu/~hdaume

 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of Alexandre Weffort
 Thenorio
 Sent: Tuesday, August 12, 2003 4:17 PM
 To: [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Subject: Help with Exceptions on I/O


 I have a program which creates textfiles out of other files. Since the
 program is runned from windows I output some text strings (Like File
 created succefully) and I need to stop the program before it quits so
 that
 the user can read the line outputted to know what went on and then he
 can
 press ENTER to quit the program.

 I managed to do this fine if no error occurs but when a error occurs I
 am
 having problems.

 The code goes like that

 main :: IO()
 main = catch (do
          <Do all the needed stuff here>
          putStr "File created Successfully. Press RETURN to quit"
          dummy <- getLine -- Halts the program so the user can read the above line
          putStr "Exiting now...") -- needed since I can't finish a do function with the dummy <- getLine line
        (\_ -> do putStr "\nFile not found. Press RETURN (ENTER) to quit."
                  dumb <- getLine
                  putStr "\nExiting...")

 So when the program runs, if the input file is found the program writes
 successfull creation of file but if the file doesn't exist, after the
 user
 gives the input filename and press enter, the program creates a new line
 and
 Halts (Probably because of the getLine function) without writing out
 anything, then when the user press ENTER again it writes the line at the
 first putStr (File not...), then writes the next putStr line under it
 (Exiting...) and exits. I don't know why it doesn't wirte the first
 line,
 halts and then when user press enter it writes the second and quits.

 Can anybody help me as I am not very familiar with exception and
 catches.


 Another question I have is: Is there any other function rather than
 getLine
 that halts a program and continues when a user press any key (Instead of
 ENTER) and besides this is an ugly code since getLine wasn't made for
 that
 but I couldn't find anything else myself.

 Thank you in advance.

 Best Regards

 Alex
 ___
 Haskell mailing list
 [EMAIL PROTECTED]
 http://www.haskell.org/mailman/listinfo/haskell





___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Type class problem

2003-08-14 Thread Brandon Michael Moore
To try some of the examples from paper Recursion Schemes from Comonads,
I wanted to define instances of Show and Observable for types defined as
the fixed point of a functor.

I defined type recursion and naturals as

newtype Mu f = In {unIn :: f (Mu f)}
data N f = S f | Z
type Nat = Mu N

An application of Mu should be showable if the functor maps showable types
to showable types, so the most natural way to define the instance seemed
to be

instance (Show a => Show (f a)) => Show (Mu f) where
  show (In x) = show x

Of course that constraint didn't work, so I made a class PShow with a
method that witnessed the constraint

class PShow f where
  pshow :: (Show a) => f a -> String

Now I could define a generic show instance for Mu, and got a Show instance
for Nat by defining a PShow instance for N

instance (PShow f) => Show (Mu f) where
  show (In x) = pshow x

instance PShow N where
  pshow Z = "Z"
  pshow (S k) = "S " ++ show k

show (In (S (In (S (In Z))))) -> "S S Z"

This works fine, but you need to be able to use the method of PClass in
the definition of Class (Mu f). I couldn't figure out how to do the same
thing with Observable (the use of the class methods is a few layers away
from the public interface).

I tried to define a set of mutually recursive definitions

instance (Observable (f (Mu f))) => Observable (Mu f) where
  observer (In x) = send "In" (return In << x)
instance (Observable a) => Observable (N a) where
  observer Z = send "Z" (return Z)
  observer (S x) = send "S" (return S << x)

Unfortunately, the class constraint solver doesn't like this. I
get an error message like

Constraint Stack Overflow:
  Observable (N (Mu N))
  Observable (Mu N)
  Observable (N (Mu N))
  Observable (Mu N)
  ...

I expected that that constraint solver would realize it could construct a
dictionary transformer with a type like Observer Nat -> Observer Nat and
take the fixed point to get an instance for Nat. I haven't looked at the
implementation, but it seems like it would be easy to add the constraint
we are trying to derive to the list of assumptions when trying to
construct all the antecedents of an instance declaration.

Can anyone tell me how to
1) get around this
2) modify GHC to handle this :)

Brandon



___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: IO Bool -> Bool

2003-08-14 Thread Brandon Michael Moore


On Thu, 14 Aug 2003, Wolfgang Jeltsch wrote:

 On Thursday, 2003-08-14, 17:05, CEST, Kevin S. Millikin wrote:
  On Wednesday, August 13, 2003 11:20 PM, Tn X-10n
  [SMTP:[EMAIL PROTECTED] wrote:
is it possible to convert IO Bool to Bool?
 
  Sure.  Which Bool do you want?  True?
 
   toTrue :: IO Bool -> Bool
   toTrue x = True
 
  Or False?
 
   toFalse :: IO Bool -> Bool
   toFalse x = False

There's also

boolFromIO :: IO Bool -> Bool
boolFromIO = boolFromIO

if you want to be even less useful :)

 I wouldn't call these *conversion* functions because they don't look at their
 argument.

  Maybe that's not what you had in mind.

 Surely not.

 Wolfgang

I'm surprised nobody has mentioned unsafePerformIO (:: IO a -> a).
As the name suggests, it isn't referentially transparent.

Are you sure you need a function of type IO Bool -> Bool? What are you
trying to do?
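
Nine times out of ten the answer is to stay in IO and bind the result
instead of converting; doesFileExist below is just a stand-in for whatever
produces your IO Bool:

import System.Directory (doesFileExist)

main :: IO ()
main = do
  exists <- doesFileExist "input.txt"   -- exists :: Bool, no conversion needed
  if exists
    then putStrLn "found it"
    else putStrLn "no such file"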

Brandon


___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Newbie Design Question

2003-08-06 Thread Brandon Michael Moore
On Tue, 5 Aug 2003, Thomas L. Bevan wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 I don't see that the contributors files are fundamentally different.
 From what I understand, it should be possible to write a
 generic function,

   importCSV :: FilePath -> IO [ (String,String) ]

 where the 1st value is a field name and the 2nd the value extracted from
 the file.

The extracted fields and values should probably be grouped into records,
like [[(String,String)]], or [FiniteMap String String] (in GHC).

The read function should probably be parameterized over a description of
the formats. Maybe there should be different read functions for different
families of files (fixed fields, comma delimited, XML, etc)

  importFixed :: FixedFormat -> FilePath -> IO [[(String,String)]]
  importComma :: CommaFormat -> FilePath -> IO [[(String,String)]]
  importXML   ::   XMLFormat -> FilePath -> IO [[(String,String)]]

The *Format types are supposed to represent the exact format of the file.
For example, you might represent the username example as
[Field "First Name" 10, Field "Last Name" 10] (with a suitable Field type).

If the file descriptions are instances of Read they could easily be loaded
from files at runtime.
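
To make that concrete, here is a sketch of the fixed-width case (all of the
names are mine, nothing standard):

data Field = Field String Int            -- field name and width
  deriving (Read, Show)
type FixedFormat = [Field]

importFixed :: FixedFormat -> FilePath -> IO [[(String, String)]]
importFixed fmt path = do
  contents <- readFile path
  return (map (cut fmt) (lines contents))
  where
    cut [] _ = []
    cut (Field name width : rest) str =
      let (val, str') = splitAt width str
      in (name, val) : cut rest str'

Since Field derives Read, a contributor's layout really can be read straight
out of a configuration file at runtime.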

 As for the mapping to XML, I would suggest using HaXml and writing a
 set of functions that write the value for a field into the correct place.

 These could then be indexed in another tuple list of type,
   [(String, String -> CFilter)]

I think the idea here is that the functions take a string and produce a
filter that will transform the XML representation of a record by adding
that field. If that is the intent of the type, I think the filters might
be fairly complicated, but it should deal nicely with missing fields.

Another approach would be to use DTD translation and the writeXML class.
The DTD2Haskell tool will produce a data type that might be more
convenient to work with than building your XML with generic combinators.

Brandon

 Then draw it all together with a liberal use of map and fold.

 Tom

 On Mon, 4 Aug 2003 10:05 pm, Tanton Gibbs wrote:
  Haskellers,
 
  I'm currently working on my first Haskell program.  I really like the
  language so far, though it has been hard to break the OO frame of mind.
  Normally, I'm a C++/Perl programmer, but I have really enjoyed the
  type-safety of Haskell over Perl as well as the Hugs interpreter over a
  make cycle.  Since execution time is not important, I haven't found any
  issues with using Hugs.
 
  Now, on to my problem.  Our company uses XML to describe our processes.
  I'm writing a DSL in Haskell that will print that XML out.  I've done most
  of it, but now I'm describing the I/O in the XML.  Basically, we have files
  come in in various formats from various file contributors and we use a
  mapping program to map the fields to the right spots in the XML.  So, for
  example, one contributor could have the following layout:
 
  Field Name Start Pos Length
  First Name,  1,   10
  Last Name,  11,  10
 
  and another contributor could have them in exactly the opposite order.
  Furthermore, some contributors may have fields that others don't (for
  example: DOB).
 
  I'm now trying to come up with a way to map this information into Haskell
  dynamically.  It would be trivial to create a module for each Contributor
  and then import all the modules.  However, we have hundreds of contributors
  and it would be a pain to have to change and re-release the program every
  time we added a new contributor.  Therefore, I need someway to take this
  information and map it into my Haskell structures.  I have come up with two
  options.  One would be to let Haskell dynamically import a contributor
  module (but I don't think that is currently possible); the other would be
  to use a configuration file and have the user specify the information
  through it.  Is there a standard Config file module?  Is there any other
  way to do this that I missed?
 
  Thanks in advance,
  Tanton Gibbs, Ph.D.
  Technical Architect
  Abilitec Knowledge Base
  Acxiom Corporation
  Conway, AR 72034
 
  ___
  Haskell-Cafe mailing list
  [EMAIL PROTECTED]
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1.2.2 (GNU/Linux)

 iD8DBQE/L3z7Yha8TWXIQwoRAmhJAJ4hLxqRC7DmTE4PYRAG5mzKS7/Z5gCgj/QF
 1zyl21u0fEMwkJeUWy8QVxs=
 =fjzf
 -END PGP SIGNATURE-

 ___
 Haskell-Cafe mailing list
 [EMAIL PROTECTED]
 http://www.haskell.org/mailman/listinfo/haskell-cafe






___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


Lazy Parsing

2002-02-27 Thread Brandon Michael Moore

I'm wondering if there are any libraries out there for creating parsers
that lazily build up their result. I know I could thread the remaining
input through a parser by hand, but it seems like someone should have
already done it.
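
The by-hand version I have in mind is roughly this (names invented): a parser
hands back its result together with the unconsumed input, and the result list
comes out lazily as the input is consumed:

type LazyParser a = String -> (a, String)   -- result plus remaining input

lazyMany :: LazyParser a -> String -> [a]
lazyMany p [] = []
lazyMany p s  = let (x, rest) = p s in x : lazyMany p rest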

I'd like to be able to turn a stream of XML into a lazy tree of tags
(probably Maybe tags, or Either errors tags), but I don't think HaXml and
the like do that sort of thing.

Brandon Moore

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe