[Haskell] Relational & Algebraic Methods --- RAMiCS 2014 --- Final CFP, with Deadlines extended!

2013-10-26 Thread Wolfram Kahl
Peter Höfner (NICTA, Australia; Publicity chair)
Ali Jaoua            (Doha, Qatar)
Peter Jipsen         (Chapman U., USA; PC co-chair)
Wolfram Kahl         (McMaster U., Canada; PC co-chair)
Tadeusz Litak        (Erlangen, Germany)
Larissa Meinicke     (U. Queensland, Australia)
Szabolcs Mikulas     (London, UK)
Bernhard Möller      (Augsburg, Germany)
Martin E. Müller     (St. Augustin, Germany; General chair)
José Oliveira        (U. Minho, Portugal)
Ewa Orłowska         (Warsaw, Poland)
Matthew Parkinson    (Microsoft Research, UK)
Damien Pous          (CNRS, France)
Ingrid Rewitzky      (Stellenbosch, South Africa)
Holger Schlingloff   (Berlin, Germany)
Gunther Schmidt      (Munich, Germany)
Renate Schmidt       (Manchester, UK)
Georg Struth         (Sheffield, UK)
George Theodorakopoulos (Cardiff, UK)
Michael Winter       (Brock U., Canada)



Steering Committee
------------------

Rudolf Berghammer  (Kiel, Germany)
Jules Desharnais   (Laval U., Canada)
Harrie de Swart    (Rotterdam, Netherlands)
Ali Jaoua          (Doha, Qatar)
Bernhard Möller    (Augsburg, Germany)
Ewa Orłowska       (Warsaw, Poland)
Gunther Schmidt    (Munich, Germany)
Renate Schmidt     (Manchester, UK)
Michael Winter     (Brock U., Canada)



Organising Committee


Martin E. Müller, Sankt Augustin, Germany: Conference Chair, Local Organiser
Peter Höfner, NICTA, Australia: Publicity
Peter Jipsen, Chapman U., USA: PC Co-Chair
Wolfram Kahl, McMaster U., Canada: PC Co-Chair

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: Characterizations of H-M type inference?

1998-04-24 Thread Wolfram Kahl

Frank A. Christoph <[EMAIL PROTECTED]> writes:

 > Does anyone know if Hindley-Milner type inference has been characterized in
 > a non-operational way?  I mean either some sort of canonical correspondence
 > (as the simply typed lambda-calculus with intuitionistic natural deduction)
 > or some statement that describes it in terms of a universal property, like
 > in category theory.  For that matter, is there any work at all which has
 > characterized the notion of type inference (not just H-M) for functional
 > and/or related languages in a declarative manner, perhaps in terms of proof
 > theory?


For typed term graphs, I provide a declarative typing system
without any notion of type inference in the following report (BibTeX below):


http://diogenes.informatik.unibw-muenchen.de:8080/
kahl/PAPERS/Kahl-1997d_UFDeclTy_colour.ps.gz  -- For colour media
kahl/PAPERS/Kahl-1997d_UFDeclTy_bw.ps.gz  -- For monochrome media


This type system does not cover Hindley-Milner let-polymorphism,
but extends parametric polymorphism in other ways.

It can be seen in action in the term graph programming
and program transformation system HOPS:


http://diogenes.informatik.unibw-muenchen.de:8080/kahl/HOPS/



Best regards,

Wolfram

===
@TechReport{Kahl-1997d,
  author =   {Wolfram Kahl},
  title ={A Framework for User-friendly Declarative Typing Systems},
  institution =  {Fakult\"at f\"ur Informatik,
   Universit\"at der Bundeswehr M\"unchen},
  year = 1997,
  number =   9704,
  address =  {D-85577 Neubiberg},
  month =DEC,
  abstract =  {Advanced type systems frequently pose problems to the
  user, as can already be observed with the still
  rather simple type systems currently in use
  e.g.\null{} in Haskell or ML: error messages
  signaling type errors frequently give complex types
  and little clue where the real source of the error
  is.

  We present a class of typing systems that are
  designed for online use, preventing input of
  untypeable constructs and guiding the user through
  the construction of programs with complex
  types. Since this implies making explicit the
  relation between programs and their types, the
  choice formalism for this purpose is that of term
  graphs, where much more program structure can
  usefully be made explicit than in linear notations.

  After presenting a framework for the definition of
  typing systems in term graphs, we discuss the
  advantages of this approach and sketch applications
  to e.g.\null{} polytypic programming.}
}





Re: Proofs and development

1998-05-28 Thread Wolfram Kahl

Alan Grover ([EMAIL PROTECTED]) writes (on the Clean list):

 > I'd then like to modify the source code adding the better algorithm, but I'd
 > like to keep the original algorithm as the documentation. The
 > language/system should then help me prove that the better version is
 > equivalent. I feel that this helps keep the source code understandable, and
 > connects it more directly with the requirements and specifications.
 >  
 > ...
 >  
 >  
 > This all has relation to literate programming, does anyone else have
 > interest in that?

This kind of approach is a big part of the motivation behind my
graphically interactive, strongly typed term graph program development
and transformation system HOPS (work in progress); see

URL: http://diogenes.informatik.unibw-muenchen.de:8080/kahl/HOPS/

HOPS in principle is a language-independent term graph programming
framework with support for user-defined text generation from term graphs.
The only constraints are the kind of term graphs supported (with
explicit variable binding and variable identity, and with second-order
rewriting) and the typing system (essentially simply typed
lambda calculus with type variables).

HOPS distinguishes

- ``declarations'' (type signatures in Haskell) that introduce a new
  identifier with its typing, and

- ``rules'', which are term graph transformation rules normally
  considered as semantics preserving equations.

  Rules in a certain shape can be used as definitions in functional
  programming languages; other rules may serve as theorems or lemmata.

  The possibility of classifying declarations and rules into groups
  according to their perceived role is not yet fully implemented;
  considerable flexibility is needed here since, among the many possible
  definitions, one may be the ``documentation version'', one may be
  best for this compiler, another may be best for that compiler,
  yet another may be best as a starting point for transformation into
  a different paradigm.


However, attributes on types (such as uniqueness or Haskell's type classes)
are not implemented yet, and currently types are always maximally identified,
which is incompatible with unique types.

Therefore, the current version of HOPS best supports a Haskellish style
of programming, and in the supplied rudimentary standard library
only output to Haskell is currently supported (to a certain extent).



Wolfram





Re: Standard Haskell

1998-09-09 Thread Wolfram Kahl

On Tue, 8 Sep 1998, Stephen H. Price <[EMAIL PROTECTED]> wrote:

   On Mon, 7 Sep 1998, Simon Peyton-Jones wrote:
   > 
   > * Incidentally, I'm leaning towards 'Haskell 98' as the name.
   > 
   A couple of minor points:
a) Haskell 1998 would be more appropriate in the light of Year 2000
   problems.
b) Dating product names like this tends to give the impression that this
   is a snapshot of a continually developing environment, such as Office, 
   which gains bells and whistles every year.
   However, Standard Haskell is to be a solid specification, which will
   be largely unchanging.  Couldn't this just be named 'Haskell' with no
   numbering, or maybe 'Haskell!' to emphasize the solidness of the
   specification.
   Perhaps 'Just Haskell' is a neat little pun if a more elaborate naming
   scheme is required.

Why has ``Haskell 1.5'' disappeared from the discussion?

In view of the Haskell 2 discussion this would make it clear
that Haskell 1.5 is the current end of one development path,
and would also leave room for small improvements to 1.6
long after Haskell 2.1, 2.2, ..., X.Y have been released.

If Haskell versions are numbered anyway, as so far with 1.2, 1.3, 1.4, and 2,
why should the ``stable version'' have a name that leaves one without
any orientation with respect to all the other versions?


Wolfram





Re: Haskell 98 library: Directory.lhs

1999-03-11 Thread Wolfram Kahl


Simon Peyton-Jones proposes:

 >  A Haskell 98 addendum
 [ ... ]
 > Well, the bits are frozen, but I propose to regard this as a gross
 > "typo" and add it to the typos page. 
 [ ... ]
 > So the "typo" fix I propose is
 [ ... ]

 > Any objections?



Call it Haskell 1.6 ;-)


Best,

Wolfram






Re: Here's a puzzle to fry your brains ....

1999-04-29 Thread Wolfram Kahl


John Launchbury posed a nice puzzle about
mutual recursive bindings in the do notation:



test :: [Int]
test = do (x,z) <- [(y,1), (2*y,2), (3*y,3)]
          Just y <- map isEven [z .. 2*z]
          return (x+y)

isEven x = if even x then Just x else Nothing

-

I would consider it as equivalent to:

-

testTrans :: [Int]
testTrans = do (x,z) <- [(\y -> y, 1), (\y -> 2*y, 2), (\y -> 3*y, 3)]
               Just y <- map isEven [z .. 2*z]
               return (x y + y)

isEven x = if even x then Just x else Nothing

-

which Hugs accepts and evaluates to [4,6,12,16,24].

The principle is to consider variables (x) bound to expressions containing
later-bound variables (y) as functions accepting values for y.
When x is used, the current y is supplied.
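Restated as a self-contained program (type signatures added here; with a modern GHC, the failable pattern in the list monad simply drops non-matching elements):

```haskell
module Puzzle where

isEven :: Int -> Maybe Int
isEven x = if even x then Just x else Nothing

-- The transformed puzzle: x is a function awaiting the later-bound y.
testTrans :: [Int]
testTrans = do (x,z) <- [(\y -> y, 1), (\y -> 2*y, 2), (\y -> 3*y, 3)]
               Just y <- map isEven [z .. 2*z]
               return (x y + y)

main :: IO ()
main = print testTrans  -- [4,6,12,16,24]
```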


MORE GENERAL would be to consider the following equivalent version
of the original program:

-

test' :: [Int]
test' = do (x,z) <- return (y,1)   ++
                    return (2*y,2) ++
                    return (3*y,3)
           Just y <- map isEven [z .. 2*z]
           return (x+y)

-

and adapt all occurrences of return
to returning functions accepting the later-bound variables.

Patterns fed by such returns are replaced by new function variables (f).

Variables bound in patterns fed by such returns
occur in two situations:

1: (z) before the later bindings:
  Apply to undefined and extract from the pattern,

2: (x) after the later bindings:
  Apply to the now bound variable and extract from the pattern.

-
testTrans' :: [Int]
testTrans' = do f <- return (\y -> (  y,1)) ++
                     return (\y -> (2*y,2)) ++
                     return (\y -> (3*y,3))
                let z = snd (f undefined)
                Just y <- map isEven [z .. 2*z]
                let x = fst (f y)
                return (x + y)
-

Hugs says:

Main> testTrans'
[4,6,12,16,24]


How about it?


Best regards,

Wolfram





Re: {-# rules

1999-05-03 Thread Wolfram Kahl


With respect to the new RULES mechanism presented by
Simon Peyton Jones (Thu, 22 Apr 1999),
Carsten Schultz <[EMAIL PROTECTED]> writes
 > [...]
 > > And what about algebraic simplification? Say,
 > The same applies to our beloved monads.
 > The compiler could be told about the monad laws.

Somebody else (Olaf Chitil?) already remarked that most ``Monads'' aren't.

So to be more consistent,
how about changing the current class Monad into
(motivation see signature ;-):

> class HasMonadOps m where
>   return :: a -> m a
>   (>>=)  :: m a -> (a -> m b) -> m b
>   ...

and introduce:

> class (HasMonadOps m) => Monad m where
> {-# RULES
>   "Monad-rightId"  forall f.    f >>= return    =  f
>   "Monad-leftId"   forall f,x.  return x >>= f  =  f x
>   ...
>  #-}

Even if some still will not like this, would it be within
the scope of the current RULES mechanism?

Would others agree that it is desirable to have
class declarations like this accepted?
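For comparison, the RULES mechanism that GHC eventually shipped takes top-level pragmas rather than class members; a minimal sketch of that form, using the well-known map-fusion rule (the module name is illustrative):

```haskell
module MapFusion where

-- Top-level rewrite rules, in GHC's {-# RULES #-} pragma syntax.
-- GHC treats these as unchecked assertions: it applies them during
-- optimisation but never verifies them.
{-# RULES
"map/map"  forall f g xs.  map f (map g xs) = map (f . g) xs
  #-}

-- Whether or not the rule fires, it must be semantics-preserving;
-- both sides agree on any input:
main :: IO ()
main = print (map (+1) (map (*2) [1,2,3 :: Int]))  -- [3,5,7]
```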

One might even imagine extended instance declarations
like the following:

> instance Monad []
> {-# PROOF
>  "Monad-rightId"
>   forall f. f >>= return
> =  concat (map return f)
> =  concat (map (:[]) f)
> =  case f of
>  [] -> concat []
>  (x:xs) -> concat ([x]:map (:[]) xs)
> =  case f of
>  [] -> []
>  (x:xs) -> x : concat (map (:[]) xs)
> =  f
>  "Monad-leftId" ...
>  ...
>  #-}

In the end, my dreams even include a proof format
that can be checked by the compiler :-)



Best regards,

Wolfram


=


   ``Zheng Ming'' -- Getting the names right

   Ancient Chinese motto, fervently supported also by Confucius





Re: Projects using HUGS or Haskell

1999-06-11 Thread Wolfram Kahl

Simon Peyton-Jones <[EMAIL PROTECTED]> writes:
 > What I have not done (any volunteers) is to export these rules, or
 > the function definitions to a thm prover.  

I am in the course of exporting function definitions
(and later probably also rules)
to the term graph transformation system HOPS
(
  URL: http://www2.informatik.unibw-muenchen.de/kahl/HOPS/
)
which can also be considered as a fledgling theorem prover ---
anyway I expect the problems to be essentially the same.

I am trying to finish a short summary next week...

Wolfram






Re: Deriving Enum

1999-07-11 Thread Wolfram Kahl

Koen Claessen <[EMAIL PROTECTED]>
proposes the following diagonalisation function:
 > 
 >   [ (a,b) | (a,b) <- [1..] // [1..] ]
 > 
 > For a suitable definition of (//), for example:
 > 
 >   (//) :: [a] -> [b] -> [(a,b)]  
 >   xs // ys = diagonalize 1 [[(x,y) | x <- xs] | y <- ys]
 >where
 > diagonalize n xss = 
 >   xs ++ diagonalize (n+1) (xss1 ++ xss2)
 >  where
 >   (xs,xss1) = unzip [ (x,xs) | (x:xs) <- take n xss ]
 >   xss2  = drop n xss
 > 
 > And it works for any type.

The core function here is

> (diagonalize (1 :: Integer)) :: [[a]] -> [a]

This function diagonalises finite or infinite lists
with arbitrary finite or infinite element lists.


To me, it seems unsatisfactory to have a solution to this pure list problem
with auxiliary functions relying on integers.


It turns out to be a nice exercise to implement

> diagonalise :: [[a]] -> [a]

without any reference to numbers.

Spoiler ahead:
I arrive at the appended version (which does not even use pairs).


Have fun!

Wolfram




   Warning:  Spoiler ahead!!



















> diagonalise :: [[a]] -> [a]
> diagonalise = f id
>  where
>   f   :: ([[a]] -> [[a]]) -> [[a]] -> [a]
>   f a []  =  split id (a []) []
>   f a (l:ls)  =  split id (a [l]) ls
>   split   :: ([[a]] -> [[a]]) -> [[a]] -> [[a]] -> [a]
>   split a [] r=  f a r
>   split a ([] : ls) r =  split a ls r
>   split a ((x:xs) : ls) r =  x : split (a . (xs :)) ls r






Re: Deriving Enum

1999-07-12 Thread Wolfram Kahl

Mark P Jones <[EMAIL PROTECTED]> writes:
 > 
 > Here's my definition of an integer free diagonalization function.
 > [..] As written, I think
 > it is a nice example of programming with higher-order functions,
 > and, in particular, using function composition to construct a
 > pipelined program:
 > 
 > > diag :: [[a]] -> [a]
 > > diag  = concat . foldr skew [] . map (map (\x -> [x]))
 > > where skew [] ys = ys
 > >   skew (x:xs) ys = x : comb (++) xs ys
 > 
 > This uses an auxiliary function comb, which is like zipWith
 > except that it doesn't throw away the tail of one list when it
 > reaches the end of the other:
 > 
 > > comb:: (a -> a -> a) -> [a] -> [a] -> [a]
 > > comb f (x:xs) (y:ys) = f x y : comb f xs ys
 > > comb f [] ys = ys
 > > comb f xs [] = xs

This is in fact much nicer and much more intuitive than my version!

The only improvement that comes to my mind is to apply the equality

   ([x] ++) = (x :)

wherever possible:

> diag :: [[a]] -> [a]
> diag  = concat . foldr skew []
> where skew [] ys = ys
>   skew (x:xs) ys = [x] : conscomb xs ys

> conscomb   :: [a] -> [[a]] -> [[a]]
> conscomb (x:xs) (y:ys) = (x : y) : conscomb xs ys
> conscomb [] ys = ys
> conscomb xs [] = map (:[]) xs

Perhaps this looks slightly less elegant, but it claws back quite
some efficiency: I compiled both versions (as DiagMPJ and DiagMPJ1)
and my original version (as DiagWK) with ghc-4.04 (perhaps a week old)
with ``-O'' and with the following test module
(substitute the respective definitions for ``@1''):

> module Main where

> import System

@1

> test = [[(x,y) | x <- [1..]] | y <- [1..]]
> main = do
>  (arg : _) <- System.getArgs
>  let n = (read arg :: Int)
>  print (length (show (take n (diag test))))

(I also tried Tom Pledger's second version, but it runs out of stack space...)

Best times out of five runs were:


Argument:        2       20      200

DiagMPJ    0:00.16  0:02.32  0:37.55
DiagMPJ1   0:00.12  0:01.50  0:23.83
DiagWK     0:00.11  0:01.33  0:19.10
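Apart from timings, the diagonal order itself can be checked on a small finite input; a self-contained sketch restating the improved version above (DiagMPJ1):

```haskell
module DiagCheck where

diag :: [[a]] -> [a]
diag = concat . foldr skew []
  where skew []     ys = ys
        skew (x:xs) ys = [x] : conscomb xs ys

-- Like zipWith (:), but keeps the tail of whichever list is longer.
conscomb :: [a] -> [[a]] -> [[a]]
conscomb (x:xs) (y:ys) = (x : y) : conscomb xs ys
conscomb []     ys     = ys
conscomb xs     []     = map (:[]) xs

-- A 3x3 grid comes out anti-diagonal by anti-diagonal:
-- [1], [2,4], [3,5,7], [6,8], [9]
main :: IO ()
main = print (diag [[1,2,3],[4,5,6],[7,8,9 :: Int]])  -- [1,2,4,3,5,7,6,8,9]
```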


This is not the first time that I get the impression that,
at least with current implementations,
functions are unbeatable when it comes to efficient data structures ---
perhaps this is why it is called functional programming ;-)


Cheers,

Wolfram






Diagonalisations (was: Re: Deriving Enum)

1999-07-12 Thread Wolfram Kahl


In the meantime I have discovered a flaw in my original
diagonalisation in that it looped in finite cases.

This can easily be mended:

DiagWK1:

> diag :: [[a]] -> [a]
> diag = f id where
>  f :: ([[a]] -> [[a]]) -> [[a]] -> [a]
>  f a [] = split id (a []) []
>  f a (l:ls) = split id (a [l]) ls
>  split :: ([[a]] -> [[a]]) -> [[a]] -> [[a]] -> [a]
>  split a [] [] = diag (a [])  -- this line was originally missing
>  split a [] r = f a r
>  split a ([] : ls) r = split a ls r
>  split a ((x:xs) : ls) r = x : split (a . (xs :)) ls r

Now we can eliminate f:

DiagWK2:

> diag :: [[a]] -> [a]
> diag [] = []
> diag (l:ls) = split id [l] ls
>  where
>   split :: ([[a]] -> [[a]]) -> [[a]] -> [[a]] -> [a]
>   split a []            []     = diag (a [])
>   split a []            (l:ls) = split id (a [l]) ls
>   split a ([] : ls)     r      = split a ls r
>   split a ((x:xs) : ls) r      = x : split (a . (xs :)) ls r

Another, equivalent version:

DiagWk3:

> diag :: [[a]] -> [a]
> diag [] = []
> diag (l:ls) = split id [l] ls
>  where
>   split :: ([[a]] -> [[a]]) -> [[a]] -> [[a]] -> [a]
>   split a [] ls  = case ls of
>  [] -> diag (a [])
>  (l:ls) -> split id   (a [l]) ls
>   split a (l : ls) r = case l of
>  [] -> split a ls r
>  (x:xs) -> x : split (a . (xs :)) ls  r

The timings are now:


                 2       20      200

DiagMPJ    0:00.16  0:02.32  0:37.55
DiagMPJ1   0:00.12  0:01.50  0:23.83
DiagWK1    0:00.12  0:01.34  0:19.02
DiagWK2    0:00.12  0:01.35  0:19.09
DiagWK3    0:00.12  0:01.34  0:18.82


The only thing that surprises me is
that the compiler does not do the optimization from DiagWK2 to DiagWK3 itself.


Wolfram





Re: diagonalisation

1999-07-12 Thread Wolfram Kahl

Stefan Kahrs <[EMAIL PROTECTED]> writes:

 > I liked Mark's version, but let me add a related challenge.
 > 
 > I defined a translation of //-list comprehensions
 > that is analogous to the foldr-translation of list comprehensions
 > (which optimises the map-concat approach) but works in the infinite case
 > as well.
 > 
 > This requires an operator  foldrinf, of type:
 > 
 > foldrinf :: (a -> [b] -> [b]) -> [a] -> [b] -> [b]
 > 
This is actually

   foldrinf :: (a -> [b] -> [b]) -> [b] -> [a] -> [b]

 > The idea is that the [a] list is our input, and the
 > [b] list is an already-built result which we have to interleave
 > with putting further elements into this using the input list
 > and our operation (the first paramater).
 > 
 > A translation using this operator would be something like this:
 > start with
 > 
 >  pairdiag = [(x,y) // x<-[1..], y<-[2..]]
 > 
 > and the translation gives
 >  pairdiag = foldrinf newop1 [] [1..]
 > newop1 x y = foldrinf (newop2 x) y [2..]
 > newop2 x y z = (x,y) : z
 > 
 > And diag itself can be written as:
 >  diag xs = [ x // y<-xs, x<-y ]
 > which translates into:
 >  diag xs = foldrinf op [] xs
 >  op x y = foldrinf (:) y x
 > 
 > The operator foldrinf can be defined as follows:
 > 
 > foldrinf op ys xs =
 > folly ys xs 0
 > where
 > folly ys (x:xs) n = op x (aux n ys)
 > where aux 0 bs = folly bs xs (n+1)
 >   aux n (b:bs) = b : aux (n-1) bs
 >   aux n [] = foldr op [] xs
 > folly ys [] n = ys
 > 
 > But, of course, this is ugly, with numbers and stuff...
 > So the challenge is to clean up this version so that it
 > looks just as nice as Mark's diag operator.
 >

What the hell is going on here?  ;-)

Trying to understand, I first strove to make aux more self-contained:

DiagSK1:

> diag xs = foldrinf op [] xs
> op x y = foldrinf (:) y x

> foldrinf :: (a -> [b] -> [b]) -> [b] -> [a] -> [b]
> foldrinf op ys xs =
>folly ys xs 0
>where
>folly ys [] n = ys
>folly ys (x:xs) n = op x (aux n n ys xs)
>   where aux m 0 bs [] = bs
> aux m 0 bs (x:xs) = op x (aux (m+1) (m+1) bs xs)
> aux m n (b:bs) xs = b : aux m (n-1) bs xs
> aux m n [] xs = foldr op [] xs

Then eliminate folly and make aux fully independent:

DiagSK2:

> diag xs = foldrinf op [] xs
> op x y = foldrinf (:) y x

> foldrinf :: (a -> [b] -> [b]) -> [b] -> [a] -> [b]
> foldrinf op ys [] = ys
> foldrinf op ys (x:xs) = op x (aux op 0 0 ys xs)

> aux :: (b -> [c] -> [c]) -> Integer -> Integer -> [c] -> [b] -> [c]
> aux op m 0 bs [] = bs
> aux op m 0 bs (x:xs) = op x (aux op (m+1) (m+1) bs xs)
> aux op m n [] xs = foldr op [] xs
> aux op m n (b:bs) xs = b : aux op m (n-1) bs xs


This allows us to play around with aux, understand what it does,
and reprogram it:

DiagSK3:

> diag xs = foldrinf op [] xs
> op x y = foldrinf (:) y x

> foldrinf :: (a -> [b] -> [b]) -> [b] -> [a] -> [b]
> foldrinf op ys [] = ys
> foldrinf op ys (x:xs) = op x (aux op split0 split1 ys xs)

> type Split a = [a] -> ([a] -> [a] , [a])

> split0, split1 :: Split a
> split0 l = (id,l)
> split1 = sshift split0

> sshift :: Split a -> Split a
> sshift split l = let p@(h,t) = split l
>in case t of
> [] -> p
> (x:xs) -> ((h . (x :)) , xs)

> aux :: (a -> [b] -> [b]) -> Split b -> Split b -> [b] -> [a] -> [b]
> aux op init follow bs xs =
>   let (bh,bt) = init bs
>   in bh (if null bt
>  then foldr op [] xs
>  else case xs of
>[] -> bt
>(x : xs) -> op x (aux op follow (sshift follow) bt xs))

Now we can observe that the two split arguments always follow each other,
so we can eliminate one:

DiagSK4:

> diag xs = foldrinf op [] xs
> op x y = foldrinf (:) y x

> foldrinf :: (a -> [b] -> [b]) -> [b] -> [a] -> [b]
> foldrinf op ys [] = ys
> foldrinf op ys (x:xs) = op x (aux op split0 ys xs)

> type Split a = [a] -> ([a] -> [a] , [a])

> split0 :: Split a
> split0 l = (id,l)

> sshift :: Split a -> Split a
> sshift split l = let p@(h,t) = split l
>in case t of
> [] -> p
> (x:xs) -> ((h . (x :)) , xs)

> aux :: (a -> [b] -> [b]) -> Split b -> [b] -> [a] -> [b]
> aux op split bs xs =
>   let (bh,bt) = split bs
>   in bh (if null bt
>  then foldr op [] xs
>  else case xs of
>[] -> bt
>(x : xs) -> op x (aux op (sshift split) bt xs))

Unfortunately, however, this time it wasn't really worth all the effort:

                 2       20      200

DiagMPJ    0:00.16  0:02.32  0:37.55
DiagMPJ1   0:00.12  0:01.50  0:23.83
DiagMPJ2   0:00.12  0:01.50  0:24.35
DiagWK1    0:00.12  0:01.34  0:19.02
DiagWK2    0:00.12  0:01.35  0:19.09
DiagWK3    0:00.12  0:01.34  0:18.82
DiagSK     0:01.01  0:45.25
DiagSK3    0:01.13  1:35.33
DiagSK4    0:01.17  1:35.77

We can claw back a little bit by using a more 

Re: Deriving Enum

1999-07-13 Thread Wolfram Kahl

Jon Fairbairn ([EMAIL PROTECTED]) writes:
 > On 11 Jul, Wolfram Kahl wrote:
 > >  [...]
 > >  The core function here is
 > >  
 > > > (diagonalize (1 :: Integer)) :: [[a]] -> [a]
 > >  
 > >  This function diagonalises finite or infinite lists
 > >  with arbitrary finite or infinite element lists.
 > >  
 > >  [..]
 > I got rather lost in the ensuing discussion, so I've composed this
 > reply from back here.  It seems to me that the core function is (//),
 > which can be written like this:
 > 
 > > module Diagonalise ((//)) where
 > > import List
 > 
 > A diagonalisation function that doesn't use numbers.
 > 
 > > (//):: [a] -> [b] -> [(a,b)]
 > 
 > [...]
 > Or have I totally lost it?

I plead guilty to having diverged from this simpler problem

> (//) :: [a] -> [b] -> [(a,b)]

to the more general problem

> diagonalise :: [[a]] -> [a]

The first can be reconstructed from the latter via:

> (//) as bs = diagonalise [ [ (a,b) | b <- bs ] | a <- as ]
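A quick sanity check of this reconstruction, bundling in the improved diag from the earlier ``Deriving Enum'' message as the diagonalise (a sketch):

```haskell
module PairDiag where

-- The improved diagonalisation from the earlier message.
diagonalise :: [[a]] -> [a]
diagonalise = concat . foldr skew []
  where skew []     ys = ys
        skew (x:xs) ys = [x] : conscomb xs ys
        conscomb (x:xs) (y:ys) = (x : y) : conscomb xs ys
        conscomb []     ys     = ys
        conscomb xs     []     = map (:[]) xs

-- Reconstructing (//) from diagonalise, as in the message above.
(//) :: [a] -> [b] -> [(a,b)]
as // bs = diagonalise [ [ (a,b) | b <- bs ] | a <- as ]

main :: IO ()
main = print ([1,2 :: Int] // "ab")  -- [(1,'a'),(1,'b'),(2,'a'),(2,'b')]
```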

HTH!


Wolfram





Re: diagonalisation

1999-07-15 Thread Wolfram Kahl

Hi,

 you write (message from Tom Pledger on Thu, 15 Jul 1999 13:33:06 +1200 (NZT))
(and I hope you do not mind that I send this response to the list, too):
 > 
 > Someone was saying that my diag function ran out of space, but I've
 > lost track of the message.  Was it yours?
 > 
Yes, it was mine.

 > Anyway, just in case it was, here's what I suspect happened:
 > 
 > > diagMJ :: [[a]] -> [a]
 > > diagWK :: [[a]] -> [a]
 > > diagTP :: [[a]] -> [[a]]
 > 
 > The following are comparable tests:
 > 
 > > diagMJ [[(x,y) | x <- [1..]] | y <- [1..]]
 > > diagWK [[(x,y) | x <- [1..]] | y <- [1..]]
 > > diagTP [[1..],[1..]]
 > 
 > This, on the other hand, represents a diag problem in infinitely many
 > dimensions

Now I understand the point of your definition! Nice!

For the sake of completeness, here is that definition again:

> diagTP :: [[a]] -> [[a]]
> diagTP = foldr (diag2 []) [[]] where
>   diag2 [] [] _  = []
>   diag2 _  _  [] = []
>   diag2 zs (x:xs) ys = (zipWith (:) (x:zs) ys) ++ diag2 (x:zs) xs ys
>   diag2 zs [] (_:ys) = (zipWith (:) zs ys) ++ diag2 zs [] ys

Sorry I did not really look into it before!

So, relating to what has been discussed on the list in the meantime,
your definition generalises

> (//) :: [a] -> [b] -> [(a,b)]

to any finite number of arguments, i.e., dimensions,
something which is not so easy to achieve with the

 > diag :: [[a]] -> [a]

approach that has the number of dimensions in its typing.

 > and AFAIK *must* run out of something:
 > 
 > > diagTP [[(x,y) | x <- [1..]] | y <- [1..]]
 >

So I should diagonalise over that result  ;-^)

> test1 = diagWK (diagTP [[(x,y) | y <- [0..]] | x <- [0..]])

Unfortunately even that loops.
This is because your definition has to insist on
finding the last element of ys before returning a first element.


There is, of course, the fundamental problem in here that
there are more infinite lists of natural numbers than fit
even into an infinite list of lists of natural numbers.

Therefore we have to abandon some of our expectations.

Something we still may try to achieve, however,
is a version of diagTP that accepts infinite lists as arguments,
and then returns, in a certain sense, *as many infinite lists as possible*,
i.e., a function

> diagINF :: [[a]] -> [[a]]

such that (with any diag from diagMPJ, diagSK, diagJF, diagWK)

> test2 = diag (map prefixes (diagINF (repeat [0..])))

eventually reaches every finite sequence of natural numbers
(the problem NOT being test2 itself, but diagINF!)
with

> prefixes [] = [[]]
> prefixes (x:xs) = [] : map (x:) (prefixes xs)
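For intuition, prefixes enumerates the finite prefixes of its argument in order of increasing length; a quick self-contained check:

```haskell
module Prefixes where

-- All finite prefixes of a (possibly infinite) list,
-- shortest first.
prefixes :: [a] -> [[a]]
prefixes []     = [[]]
prefixes (x:xs) = [] : map (x:) (prefixes xs)

main :: IO ()
main = print (prefixes [0,1,2 :: Int])  -- [[],[0],[0,1],[0,1,2]]
```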

I hope it will be sufficient to restrict argument lists for diagINF
to contain only NONEMPTY lists.
It may be easier to demand that all element lists are infinite.

It should, however, not be relevant whether the argument list itself
is finite or not, or should it?



Best regards,

Wolfram






Diagonalisation

1999-07-14 Thread Wolfram Kahl


Jón Fairbairn  <[EMAIL PROTECTED]> writes:
 > 
 > > diagonalise:: [[a]] -> [a]
 > > diagonalise l = d [] l
 > 
 > 
 > > d [] [] = []
 > > d acc [] = --  d [] acc would do, but muddles the order;
 > >heads acc ++ d (rests acc) []
 > > d ls (l1:rest) = heads (ls') ++ d (rests ls') rest
 > >  where ls' = l1: ls
 > 
 > > heads l = [a | (a: _) <- l]
 > > rests l = [as | (_: as) <- l]
 > 
This differs from the versions given by Mark Jones, Stefan Kahrs and myself
in that it does the diagonals ``upside down''.

However, methodologically it is actually very close to my version,
and even closer to a predecessor of it that I no longer have.
In fact, if I introduce into DiagWK3 the corresponding muddling avoidance,
we get:

DiagWK4:

> diag :: [[a]] -> [a]
> diag [] = []
> diag (l:ls) = split id [l] ls
>  where
>   split :: ([[a]] -> [[a]]) -> [[a]] -> [[a]] -> [a]
>   split a [] ls  = case ls of
>  [] -> split id [] (a [])
>  (l:ls) -> split id (a [l]) ls
>   split a (l : ls) r = case l of
>  [] -> split a ls r
>  (x:xs) -> x : split (a . (xs :)) ls r


Then a tiny change turns the diagonals ``upside down'', too:

DiagWK5:

> diag :: [[a]] -> [a]
> diag [] = []
> diag (l:ls) = split id [l] ls
>  where
>   split :: ([[a]] -> [[a]]) -> [[a]] -> [[a]] -> [a]
>   split a [] ls  = case ls of
>  [] -> split id [] (a [])
>  (l:ls) -> split id (l : (a [])) ls -- CHANGED!!
>   split a (l : ls) r = case l of
>  [] -> split a ls r
>  (x:xs) -> x : split (a . (xs :)) ls r

The timing differences between DiagJF and DiagWK5 are probably
mostly due to some combination of

* the double list analysis effort in heads and rests
* and the tradeoff between lists with (++) and functions with (.).

It might be interesting to look into the compiled code in detail.


                 2       20      200      500

DiagMPJ    0:00.16  0:02.32  0:37.55  1:48.09
DiagMPJ1   0:00.12  0:01.50  0:23.83  1:06.71
DiagMPJ1a  0:00.12  0:01.47  0:23.54  1:06.04

DiagJF     0:00.13  0:01.64  0:21.85  0:57.99

DiagWK3    0:00.12  0:01.34  0:18.82  0:52.31
DiagWK4    0:00.11  0:01.33  0:19.29  0:53.22
DiagWK5    0:00.12  0:01.34  0:18.92  0:52.06


Wolfram





Re: instance overlapping

1999-07-18 Thread Wolfram Kahl

Nguyen Phan Dung <[EMAIL PROTECTED]> writes:
 > 
 > If I have the following declaration:
 > 
 > class Soup where
 > ...
 > 
 > instance Soup String where
 > ...
 > 
 > instance Soup t => Soup [t] where
 > ...
 > 
 > This will lead to an error: "instance overlapping".
 > 
 > Is there anyway to solve this? 
 > (I could not make an instance for Char)

One possibility is

> newtype MyString = MyString String

> instance Soup MyString where...



For another possibility let us assume:

> class Soup a where
>   soupmethod :: souptype a

> instance Soup String where
>   soupmethod = stringsoupmethod

> instance Soup t => Soup [t] where
>   soupmethod = souplistfunctor (soupmethod :: souptype t)

A trick I am then using in these cases is the following:

> class Soup a where
>   soupmethod :: Souptype a

> class SoupList a where
>   souplistmethod :: Souptype [a]

> instance SoupList a => Soup [a] where
>   soupmethod = souplistmethod

> instance SoupList Char where
>   souplistmethod = stringsoupmethod

> instance SoupList t => SoupList [t] where
>   souplistmethod = souplistfunctor souplistmethod

Maybe it does not give you all the power you want,
but in simple cases it serves me well.
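To make the trick concrete, here is a self-contained sketch with Souptype a instantiated to a -> String; the names Pretty and PrettyList (and all method bodies) are illustrative, not from the original message:

```haskell
module SoupTrick where

import Data.List (intercalate)

class Pretty a where
  pretty :: a -> String

-- Helper class: the ``list case'' of Pretty, dispatched on the element type.
class PrettyList a where
  prettyList :: [a] -> String

-- The single list instance of Pretty defers to the helper class...
instance PrettyList a => Pretty [a] where
  pretty = prettyList

-- ...so Strings ([Char]) and other lists no longer overlap:
instance PrettyList Char where
  prettyList s = show s                              -- Strings with quotes

instance PrettyList Int where
  prettyList xs = "[" ++ intercalate "," (map show xs) ++ "]"

instance PrettyList t => PrettyList [t] where
  prettyList xs = "[" ++ intercalate "," (map prettyList xs) ++ "]"

main :: IO ()
main = do
  putStrLn (pretty "ab")                             -- "ab"  (with quotes)
  putStrLn (pretty ([[1,2],[3]] :: [[Int]]))         -- [[1,2],[3]]
```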


Cheers,

Wolfram







Re: diagonalisation

1999-07-20 Thread Wolfram Kahl

Hi all,

  since I have gotten into the habit of relating proposed diagonalisation
functions, I will not resist this time, either ;-)

Koen Claessen <[EMAIL PROTECTED]> writes:
 > 
 >   as // bs = diag [] [ [ (a,b) | a <- as] | b <- bs ]
 > 
 >   diag current []  = diag [] current
 >   diag current (new : pending) = heads ++ diag (new : tails) pending
 >where
 > (heads,tails) = unzip [ (c,cs) | (c:cs) <- current ]
 > 
 > I thought it was short and sweet :-)

It is.  At a price, though:

Like my first versions, it does not terminate in finite cases; try:
  
   take 20 (diag [] [[3..],[1,2]])

Otherwise, it is in the class of ``upside-down diagonals'':

Main> take 20 (diag [] [[(x,y) | y <- [0..] ] | x <- [0..]])
[(0,0),(1,0),(0,1),(2,0),(1,1),(0,2),(3,0),(2,1),(1,2),(0,3)
,(4,0),(3,1),(2,2),(1,3),(0,4),(5,0),(4,1),(3,2),(2,3),(1,4)]

and is very close to the variant of Jón Fairbairn:

> diag:: [[a]] -> [a]
> diag l = d [] l

> d [] [] = []
> d acc [] = --  d [] acc would do, but muddles the order;
>heads acc ++ d (rests acc) []
> d ls (l1:rest) = heads (ls') ++ d (rests ls') rest
>  where ls' = l1: ls

> heads l = [a | (a: _) <- l]
> rests l = [as | (_: as) <- l]

The second line of Jón Fairbairn's ``d'' handles termination;
this is what is missing in Koen Claessen's solution.
Koen Claessen's ``new'' is analysed one round earlier
in Jón Fairbairn's solution (and also in my corresponding solution DiagWK5).

The pairs in Koen Claessen's solution seem to be more expensive than
the double analysis in Jón Fairbairn's solution:


                 2       20      200      500

DiagKC     0:00.14  0:01.73  0:24.27  1:05.41
DiagJF     0:00.13  0:01.64  0:21.85  0:57.99
DiagWK5    0:00.12  0:01.34  0:18.92  0:52.06


The infinite dimensional problem is also quite fun --- I think I shall try
to accumulate some more interesting stuff in this direction and then
set out to condense it in some form suitable for publication,
perhaps even as a Functional Programming Pearl.


Wolfram





Re: Again: Referential Equality

1999-07-27 Thread Wolfram Kahl

Andreas C. Doering <[EMAIL PROTECTED]> writes:

 > 
 > I would like to have a comparison instruction that compares the internal
 > reference of two objects. 
 > Let's call it "req".
 > 
 > req :: a -> a -> Bool
 > 
 > -- of course it is an equivalence operation
 > req x x = True 
 > req x y = req y x 
 > (req x y) && (req y z) ==> req x z 
 > 
 > req x y = false does NOT imply that x and y are different, only that 
 > they are represented differently. 
 > 

Constructive/destructive ;-) proposal:

> referentially_equal :: a -> a -> IO Bool

Because it means interaction with the real world of implementation.
Perhaps put it in IOExts.


Then one might go along to define (for oneself):

> req :: a -> a -> Bool
> req a b = unsafePerformIO (referentially_equal a b)

But one should be forced to make oneself aware of the unsafePerformIO inside,
and of the non-standard language features this implies ...
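For what it is worth, GHC's stable names (System.Mem.StableName) support exactly this kind of observational test in the IO monad; a sketch (the function name is my choice, not a library export):

```haskell
import System.Mem.StableName (makeStableName)

-- A True answer means both arguments are currently the same heap object;
-- a False answer is inconclusive: the values may still be equal.
referentiallyEqual :: a -> a -> IO Bool
referentiallyEqual x y = do
  nx <- makeStableName x
  ny <- makeStableName y
  return (nx == ny)
```

The IO type makes the dependence on the run-time representation explicit, just as argued above.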



Cheers,

Wolfram





Re: The dreaded layout rule

1999-07-30 Thread Wolfram Kahl

Simon Peyton-Jones <[EMAIL PROTECTED]> writes:
 > 
 > > In other words, it is a bug (and GHC and Hugs don't do it
 > > right - see my previous message; from your comment, I
 > > presume HBC also doesn't follow the definition).  I think,
 > > the only Right Thing is to remove this awful rule (unless
 > > somebody comes up with a rule that can be decided locally).
 > 
 > Maybe so.  But (H98 editors hat on) this is more than a "typo".

I am surprised!  ;-)  

 > It's a Haskell 2  issue.  Perhaps there will be no fully conforming 
 > H98 compilers!

Perhaps it would be a reasonable Haskell 1.6 issue?


Wolfram





Re: Wanted: a Business Programming Language!

1999-09-21 Thread Wolfram Kahl


J.P. Morrison <[EMAIL PROTECTED]> writes on the dataflow list:

 > 
 >  I have been in the IT business for about 40 years, and have been
 >   maintaining PL/I programs intensively for the last year or so.  In 1994
 >   I wrote a book called "Flow Based Programming" which was quite well
 >   received.  It describes a technique for developing big applications as
 >   networks of asynchronous processes communicating via streams of
 >   formatted data chunks.  In fact this technique has been in continuous
 >   use for the last 30 years at various places I have worked (surprise!),
 >   and I feel it has proven its usefulness.  It also leads very naturally
 >   to the concept of mini-languages, where a given process can be written
 >   in a language appropriate to the application area.  However, designing
 >   such a language is non-obvious!  As a result of my work, I feel pretty
 >   sure that both variables and constants are undesirable - variables
 >   because they are very sensitive to the timing of events, and constants
 >   because they usually aren't!
 > 
 >   Now, even if we replace variables by entity attributes, in most
 >   languages I am familiar with the coding schemes are usually mathematical
 >   - i.e. most variables are either binary, packed decimal, BCD, etc.  What
 >   I am looking for is a language where most values have dimension, which
 >   constrains the operations that can be performed on them.  E.g.
 > 
 >   - currency can be added, but not multiplied
 >   - currency can be multiplied or divided by non-dimensioned quantities,
 >   but not added to them
 >   - distances can be added; if multiplied, you get area; multiply again,
 >   and you get volume!
 >   - dates can be subtracted, but not added; number of days can be added or
 >   substracted to/from a date
 >   - etc.
 > 
 >   Further, dimensioned quantities are measured in units, so it would be
 >   nice to be able to add miles to kilometers - I don't care what unit is
 >   used for the result, so long as it is identified.  Currency conversions
 >   are more problematic, as a) the rate changes frequently, and b) there is
 >   often a charge, so some amount of cash has to be sent somewhere else!
 > 
 >   If your answer is OOP, it must be symmetrical (the permitted operations
 >   must depend on the types of all variables - in which case, how do you
 >   manage all the combinations?) - the popular OOP languages are
 >   asymmetrical, which is not acceptable.

Multi-parameter type classes in Haskell.

Haskell:

http://www.haskell.org/

GHC User Manual --- Section about Multi-parameter type classes:

http://www.haskell.org/ghc/docs/latest/users_guide/users_guide-5.html#ss5.5

Paper:

http://www.dcs.gla.ac.uk/~simonpj/multi.ps.gz

 >   Also, the language must be
 >   natural to non-programmers - business types ought to be able to express
 >   business rules without having to learn some abstruse notation...
 > 

Well, Haskell looks more like mathematical notation than like the kind
of programming language (C, COBOL, Java) that most non-programmers are
used to considering non-natural, so it might fit your ideas ;-)

More seriously:

There has been quite some research about using Haskell as background
for domain-specific languages, so the approach to follow would
require some real Haskell programming to set up the necessary classes
and combinators, and then business-type sticking-together
of these building blocks.
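As a toy illustration of that approach (every name below is made up for this sketch, not taken from any existing library): an abstract currency type whose exported operations are exactly the permitted ones, so the dimension rules are enforced by the type checker:

```haskell
-- Hypothetical sketch: amounts of money as an abstract type whose
-- operations encode the dimension rules from the posting.
newtype Currency = Currency Rational
  deriving (Eq, Ord, Show)

-- currency + currency: allowed
addMoney :: Currency -> Currency -> Currency
addMoney (Currency a) (Currency b) = Currency (a + b)

-- dimensionless * currency: allowed
scaleMoney :: Rational -> Currency -> Currency
scaleMoney k (Currency a) = Currency (k * a)

-- Deliberately absent: (*) :: Currency -> Currency -> Currency,
-- so multiplying two amounts of money is a compile-time type error.
```

Distances, areas, dates, and date differences can be handled the same way, and multi-parameter type classes then relate the types (distance * distance = area, and so on).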

See the home page of Paul Hudak for DSL work:

http://www.cs.yale.edu/~hudak.html


A recent thread on the Haskell mailing list discussed units:

Search for ``Units'' in:

http://www.dcs.gla.ac.uk/mail-www/haskell/


This message also goes to the Haskell list.


Best regards,

Wolfram





Re: Referential Transparency

1999-10-08 Thread Wolfram Kahl


 > 
 >  | As far as I understood the matter, referential transparency
 >  | denotes the property of a language, in which variables of the
 >  | same scope do not change their value. 
 > 
 > So ML and Erlang are referentially transparent too?

Before everybody gets completely muddled up,
I point to a page where I conserve an old USENET NEWS posting
by Tom DeBoni <[EMAIL PROTECTED]> quoting the definitions
from a few relevant books:

http://www2.informatik.unibw-muenchen.de/kahl/reftrans.html


Regards,

Wolfram






Re: Partial Type Declarations

1999-01-16 Thread Wolfram Kahl

Jeffrey R. Lewis <[EMAIL PROTECTED]> writes:
 > 
 > foo :<= C a => a -> b  roughly equiv to foo :: C _a => _a -> _b
 > 
 > I can easily imagine that you might want some variables to be a bound, and
 > others to be exact, as in
 > 
 > foo :: C a => a -> _b
 > 
 > I don't think the above can be expressed with Claus' proposal.
 > 

You could, by being more explicit than usual:

> foo :<= forall b => C a => a -> b
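As it happens, GHC (much later) grew a PartialTypeSignatures extension in which an anonymous wildcard `_` in a signature is filled in by inference, close to the `_b` notation under discussion; a minimal sketch:

```haskell
{-# LANGUAGE PartialTypeSignatures #-}
{-# OPTIONS_GHC -Wno-partial-type-signatures #-}

-- The result type is left as a wildcard; GHC infers it to be [a].
foo :: Num a => a -> _
foo x = [x, x + 1]
```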


Wolfram



Re: Partial Type Declarations

1999-01-16 Thread Wolfram Kahl

To my last message:
 > Jeffrey R. Lewis <[EMAIL PROTECTED]> writes:
 >  > 
 >  > foo :<= C a => a -> b  roughly equiv to foo :: C _a => _a -> _b
 >  > 
 >  > I can easily imagine that you might want some variables to be a bound, and
 >  > others to be exact, as in
 >  > 
 >  > foo :: C a => a -> _b
 >  > 
 >  > I don't think the above can be expressed with Claus' proposal.
 >  > 
 > 
 > You could, by being more explicit than usual:
 > 
 > > foo :<= forall b => C a => a -> b
 > 
I forgot to add:

and this is exactly where problems begin,
since, besides harmless cases like

> foo :: forall b => C int => int -> b

you also want to allow instantiations such as

> foo :: forall b => C [b] => [b] -> b

Therefore

 * either the ``metavariables'' approach makes sense,
   where metavariables are context holes, and
   we allow context substitution

 * or we allow second order substitution.

In this second case, we have to distinguish between

> foo1 :<= forall b => C a => a -> b

which allows

> foo1 :: forall b => C int => int -> b

but not

> foo1 :: forall b => C [b] => [b] -> b

and

> foo2 :<= forall b => C (a b) => (a b) -> b

which allows both of


> foo2 :: forall b => C int => int -> b

and

> foo2 :: forall b => C [b] => [b] -> b


This second approach seems to be preferable to me.


Wolfram



Re: Partial Type Declarations

1999-01-16 Thread Wolfram Kahl



Starting from Jeffrey R. Lewis' <[EMAIL PROTECTED]> wish to
let partial type declarations express binding of SOME type variables

 >  >   foo :: C a => a -> _b

and modulating the syntax proposed by Claus Reinke <[EMAIL PROTECTED]>,

 >  >   foo :<= C a => a -> b

I suggested the following notation to express this desire:

> foo2 :<= forall b => C (a b) => (a b) -> b


This can also be seen as syntactic sugar
for an assertion that can be checked at compile-time
(a type judgement assertion):

> assert (exists a => foo2 :: forall b => C (a b) => (a b) -> b)

This way we get even closer to existing syntax,
although it is of course a departure from
the original intention of Koen Claessen <[EMAIL PROTECTED]>
to hide subtleties of Haskell's type system from DSL users.

So building on such an explicit assertion syntax,
we might easily adapt some variation of any of the proposals as syntactic sugar.


Nevertheless, I think that the distinction between

> assert (exists a => foo1 :: forall b => C a => a -> b)

and

> assert (exists a => foo2 :: forall b => C (a b) => (a b) -> b)

should not be blurred.

Am I too puristic here?



Cheers,

Wolfram



Re: Partial Type Declarations

1999-01-16 Thread Wolfram Kahl

Jeffrey R. Lewis <[EMAIL PROTECTED]> wrote:
 > 
 > Anyway, the only thing missing now in the above proposal
 > is a similar flexibility with contexts.
 > Say, you want `b' to be a bound, and thus use :<=,
 > but you want the context to be exact
 > (i.e. you don't want extra context elements to be allowed).
 > I can't think of any particularly compelling notation for
 > "the context is `C a' and no more".
 > In this case, the combination of allowing `...' to extend contexts,
 > and _a (or similar annotation) for unquantified type variables seems
 > more natural.

Getting more and more explicit,
we may well distinguish between

> assert (exists a :: * -> *,
>                D :: * -> Context,
>                E :: * -> Context,
>                F :: Context
>         =>
>         foo3 :: (forall b :: *
>                  =>
>                  C (a b), D (a b), E b, F
>                  =>
>                  (a b) -> b))

and

> assert (exists a :: * -> *,
>                D :: * -> Context,
>                E :: * -> Context
>         =>
>         foo3 :: (forall b :: *
>                  =>
>                  C (a b), D (a b), E b
>                  =>
>                  (a b) -> b))

or any other specialisations.

Now the DSL user really gets lost,
and we should consult Simon Marlow
for an appropriate layout rule for types ;-)


But the expressivity is ours ;-)


Have fun,

Wolfram



IORefs in Ord

2000-02-16 Thread Wolfram Kahl


On glasgow-haskell-users, Simon Peyton-Jones <[EMAIL PROTECTED]>
answered my question arising from a different thread:
 >
 > | This is something that I have long been wondering about
 > | (perhaps it is just because of my ignorance):
 > | Wouldn't stable pointers be a cheaper and more appropriate means
 > | to get Ord for MVars, STRefs, and IORefs?
 >
 > Could be -- but do we really want to clog up the stable-pointer table
 > with an entry for every MVar, whether or not anyone is interested in
 > ordering?

To this I replied after some motivation:

I would already be happy if IORefs, STRefs and MVars came with a variant
in Ord (consider this as a concrete proposal for the standard library)
--- even if some implementations choose to implement that via the
Integer trick: hopefully the best implementations
would provide something faster ;-)

This was not explicit enough ---  Simon replied correctly:

 > Are you saying that adding an Integer
 > to each graph node slows things down unacceptably?  If we add a unique
 > Id to every IORef, that will slow down every IORef in a similar way.
 > I'm still having trouble getting clear why it's a good thing to 
 > tie together these unique stamps with IORefs.
 > 
Now, adding the unique ID is slower than having integer indices
instead of IORefs as access information;
with the integers as array indices, however, I do
memory management myself.

I really want neither of the two.

My thought was that using stable pointers instead of unstable pointers
for IORefs should provide a cheap Ord (plain low-level pointer comparison)
without memory or access time penalties.
But perhaps I do not know enough about stable pointers...

If this is too expensive to do for all IORefs,
then I am happy with an additional type IOORefs that provides
the same interface as IORefs (put that in a class!), plus Ord.
And this Ord would be efficiently implemented via stable pointers
in the best implementations, and less efficiently via unique tags in the
other implementations ;-)
Then I would use IOORefs only where necessary.

What kinds of costs are incurred by the stable pointer table?

And if Ord sounds too Unsafe in this context,
then I would be happy with any Mappable class that
suffices as precondition for efficient FiniteMap-like data types.
(Internally equivalent to Ord, but not visible to the end user.)
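The unique-tag fallback mentioned above can be sketched directly (the names IOORef/newIOORef are mine; a global counter via unsafePerformIO stands in for compiler support):

```haskell
import Data.IORef
import System.IO.Unsafe (unsafePerformIO)

-- Global allocation counter; NOINLINE keeps it a single shared cell.
counter :: IORef Integer
counter = unsafePerformIO (newIORef 0)
{-# NOINLINE counter #-}

-- An IORef paired with a unique Integer tag, giving Eq and Ord
-- at the cost of one counter bump per allocation.
data IOORef a = IOORef !Integer (IORef a)

instance Eq (IOORef a) where
  IOORef i _ == IOORef j _ = i == j

instance Ord (IOORef a) where
  compare (IOORef i _) (IOORef j _) = compare i j

newIOORef :: a -> IO (IOORef a)
newIOORef x = do
  i <- atomicModifyIORef' counter (\n -> (n + 1, n))
  r <- newIORef x
  return (IOORef i r)

readIOORef :: IOORef a -> IO a
readIOORef (IOORef _ r) = readIORef r

writeIOORef :: IOORef a -> a -> IO ()
writeIOORef (IOORef _ r) = writeIORef r
```

A stable-pointer-based implementation could offer the same interface without the per-allocation counter traffic.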


Wolfram



lines --- to derive, or not to derive ;-)

2000-05-03 Thread Wolfram Kahl


Currently I teach functional programming classes again,
and I wanted to present the derivation of ``lines''
that can be found in [Bird-Wadler-1988].
However, since I wanted to spare my students the confusion of
a non-prelude-equivalent definition of ``unlines'',
I re-did the development of ``lines'' from the current
prelude definition of ``unlines'', therewith in fact solving
exercise 4.3.3. of that book :-).

To my surprise,
the prelude definition of lines:

> lines   :: String -> [String]
> lines "" = []
> lines s  = let (l, s') = break (== '\n') s
>            in  l : case s' of
>                      []      -> []
>                      (_:s'') -> lines s''

(for which I have not yet seen a derivation,
 and which is actually used in hugs, ghc, and hbc
 although the report allows more efficient implementations)
not only is hardly less cryptic than the definition I arrived at
(and anybody would arrive at):

> lines :: String -> [String]
> lines = foldr f []
>  where
>   f '\n' xss = "" : xss
>   f x [] = [[x]]
>   f x (ys:yss) = (x:ys) : yss

but it is also much less efficient:

Here are the best running times (out of ten runs) of

> main = interact (unlines . lines)

on a 1.4Mb source file using

 *  first the prelude definition,
 *  then the new definition,
 *  then the new definition with foldr expanded
(the fact that this is faster should be an
 urgent challenge to compiler writers),
 *  and finally even with f expanded: 

               ghc-4.06 -O       |   hbc-0..5b -O
               real      user    |   real      user
  PrelLines:   1.772s    1.660s  |   2.615s    2.330s
  NewLines:    1.340s    1.220s  |   1.428s    1.280s
  NewLines2:   1.204s    1.080s  |   1.361s    1.210s
  NewLines3:   1.190s    1.060s  |   1.291s    1.140s
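For anyone who wants to reproduce the comparison, the fold version can be checked against the Prelude's lines directly (renamed linesF here to avoid the name clash):

```haskell
-- The foldr-based definition from the posting, renamed to avoid
-- shadowing the Prelude's lines.
linesF :: String -> [String]
linesF = foldr f []
  where
    f '\n' xss     = "" : xss
    f x []         = [[x]]
    f x (ys : yss) = (x : ys) : yss

-- e.g.  linesF "one\ntwo\n"  ==  lines "one\ntwo\n"  ==  ["one","two"]
```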

Since everybody on the Haskell committee knows [Bird-Wadler-1988] :-),
I can only conclude that the prelude definition of ``lines''
that is contained in the report is a typo (TM) ;-),
especially since it has been stated repeatedly on the Haskell list
that prelude definitions in the report should also serve as
examples of good coding practices.


The picture is essentially the same for ``words'',
where I would propose the following definition:

> words :: String -> [String]
> words [] = []
> words (x:xs)
>  | Char.isSpace x  =  words1 xs
>  | otherwise       =  case words xs of
>                         []       ->  [[x]]
>                         (ys:yss) ->  (x:ys):yss
>  where
>   words1 [] = []
>   words1 xs@(y:ys)
>    | Char.isSpace y  =  words1 ys
>    | otherwise       =  [] : case words ys of
>                                []       ->  [[y]]
>                                (zs:zss) ->  (y:zs):zss

All the details can be found at:

http://ist.unibw-muenchen.de/Lectures/FT2000/FP/Lines.html


Best regards,

Wolfram Kahl




Re: lines --- to derive, or not to derive ;-)

2000-05-10 Thread Wolfram Kahl


Shin-Cheng Mu <[EMAIL PROTECTED]>
is puzzled why the derived foldr version of lines
is significantly faster than the prelude version,
which he recognises as an unfold:

 > I am curious why the Prelude version is less efficient than
 > your fold version? It seems to me the fold version constructs a
 > cons cell and deconstructs it immediately every time it glues a
 > character to the first line, while the Prelude version only
 > wastes a pair cell for every line.

I am surprised that none of the experts has taken this on.


So I have experimented a little bit further,
first assuming that ``break (== '\n')'' might be a lot slower
than ``span (/= '\n')'' or even ``span ('\n' /=)'' --
but the running time differences are small.

So here is my --- maybe superficial --- conclusion,
somewhat illuminated by the animations
now to be found on my ``lines page'' at:

http://ist.unibw-muenchen.de/Lectures/FT2000/FP/Lines.html

 * In the unfold definition,
   the pair cell is constructed and destructed for every character,
   and every construction of the pair cell
   induces two destructive accesses,
   via different evaluation paths, and at different times

 * In the fold definition,
   the cons cell is destructed locally in one step.

This is also illustrated by the fact that the HOPS animations
for an optimised version of the prelude definition
have essentially just one step more per character of the input string
than those for the fold definition.


Does this sound plausible to the experts?


Wolfram




Re: lines --- to derive, or not to derive ;-)

2000-05-11 Thread Wolfram Kahl


Simon Marlow writes:

 >  You didn't mention the accumulating parameter version:

[[[with correction pointed out by Koen Claessen:]]]

> lines :: String -> [String]
> lines s = lines' s ""
>   where
>   lines' []   acc = [reverse acc]
>   lines' ('\n':s) acc = reverse acc : lines' s ""
>   lines' (c:s)acc = lines' s (c:acc)

 > This one is more than twice as fast as the foldr version, despite the fact
 > that it needs to reverse the accumulating parameter for each line.

According to my measurements,
with HBC it is twice as fast as the prelude version,
and on the whole it is slightly faster than the foldr version.

However, it can still be sped up: Just use the old trick of
avoiding reverse by using a functional accumulator:

> lines :: String -> [String]
> lines s = lines' s id
>   where
>   lines' []   acc = [acc []]
>   lines' ('\n':s) acc = acc [] : lines' s id
>   lines' (c:s)acc = lines' s (acc . (c:))

(Interestingly, this is the only optimisation that HBC
 does not gratefully take advantage of.)


As Koen Claessen pointed out furthermore,
neither this nor my foldr definition are fully lazy.
I produced a first lazified variant of my foldr definition,
and it has running times similar to those of the prelude variants.


All the stuff on is on the updated page:

http://ist.unibw-muenchen.de/Lectures/FT2000/FP/Lines.html


So the interesting question is whether we can get laziness
without these continuously reconstructed pairs.



Cheers,

Wolfram




Editor Combinators

2000-06-21 Thread Wolfram Kahl



Editor Combinators
==

   http://ist.unibw-muenchen.de/EdComb/


Editor combinators allow one to assemble structure editors compositionally
instead of generating them from language descriptions,
just as parsing combinators allow one to assemble parsers compositionally
instead of employing parser generators to generate parsers from
grammar descriptions.

We have put together a first, simple editor combinator library in Haskell,
connected it to the GUI toolkit FranTk,
and described and documented it in a technical report;
all of this is available from the EdComb home page at URL:

http://ist.unibw-muenchen.de/EdComb/


Best regards,

Wolfram Kahl




2nd CFP: RelMiS 2001

2001-01-02 Thread Wolfram Kahl


[please post.  apologies for multiple copies]


SECOND CALL FOR PAPERS


RelMiS 2001 - Relational Methods in Software


7-8 April 2001, Genova, Italy

 http://ist.unibw-muenchen.de/RelMiS/

   A Satellite Event to ETAPS 2001


Important Dates
===

Deadline for submission:10 January  2001
Notification of acceptance:  9 February 2001
Final version due:  28 February 2001
Workshop dates:7-8 April2001


Workshop Topics
===

* Relational Specifications and Modelling:
 methods and tools, tabular methods, abstract data types
* Relational Software Design and Development Techniques:
 relational refinement, heuristic approaches for derivation, correctness
 considerations, dynamic programming, greedy algorithms, catamorphisms,
 paramorphisms, hylomorphisms and related topics
* Programming with Relations:
 prototyping, testing, fault tolerance, information systems, information
 coding
* Implementing relational algebra with mixed representation of relations
* Handling of Large Relations:
 problems of scale, innovative representations, distributed
 implementation


Submissions
===

Submissions will be evaluated by the Program Committee for inclusion in the
proceedings, which will be published in the ENTCS series. Papers must
contain original contributions, be clearly written, and include appropriate
reference to and comparison with related work. Papers should be submitted
electronically as uuencoded PostScript files at the address
[EMAIL PROTECTED] Preference will be given to papers that are no
shorter than 10 and no longer than 15 pages. A separate message should also
be sent, with a text-only one-page abstract and with mailing addresses (both
postal and electronic), telephone number and fax number of the corresponding
author.

Final versions will have to be submitted as LaTeX source and have to adhere
to the ENTCS style!


Programme Committee
===

Rudolf Berghammer (Kiel), Jules Desharnais (Quebec), Wolfram Kahl (Munich),
David L. Parnas (Hamilton), Gunther Schmidt (Munich)


-

E-Mail: [EMAIL PROTECTED]
Workshop home page: URL: http://ist.unibw-muenchen.de/RelMiS/


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell