Re: How does GHC's testsuite work?

2017-10-31 Thread Wolfram Kahl
Cher Sébastien,

> Thanks. Unfortunately, the paper has been written in French.

No need to add ``Unfortunately''...  ;-)


> Will come back with a link ASAP!

I guess that many will agree with my opinion:
Links to French papers are welcome, too!


Amicalement,

Wolfram


Re: simultaneous ghc versions

2015-07-31 Thread Wolfram Kahl
On Sat, Aug 01, 2015 at 06:31:38AM +1000, Erik de Castro Lopo wrote:
> I maintain multiple versions of GHC on all the machines I use regularly
> for Haskell development. I have:
>
>  * ghc-7.6.3 installed under /usr/lib/ghc-7.6/
>  * ghc-7.8.4 installed under /usr/lib/ghc-7.8/
>  * ghc-7.10.2 installed under /usr/lib/ghc-7.10/
>
> To switch between versions all I need to do is modify $PATH
> to remove say /usr/lib/ghc-7.6/bin and add /usr/lib/ghc-7.10/bin.
> This lets me have two terminal windows side by side with different
> versions of GHC.

I use essentially the same setup,
but found that cabal-install does not play nicely with this:
No matter under which prefix I install cabal-install,
it always assumes the same global path for its configuration file.
This is a nuisance in particular when for some reason different versions
of GHC need different versions of cabal-install --- apparently older GHCs
don't work with newer versions of cabal-install?

I did experiment with cabal-install's -w and sandboxes,
and agree with what was previously mentioned:
It only helps if you never use GHC without cabal-install.

It would be nice if cabal-install had an installation-time option
to set its prefix directory FOR EVERYTHING.
The best kludge I found was scripts /usr/lib/ghc-*/bin/mycabal
that call /usr/lib/ghc-*/bin/cabal with appropriate options...
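
(A minimal sketch of such a wrapper --- here written in Haskell rather
than shell, with the cabal path and config-file location made up for the
example; the actual mycabal scripts are not shown in this message:

 import System.Environment (getArgs)
 import System.Process (callProcess)

 -- Call the per-GHC cabal-install binary with a per-GHC config file,
 -- passing all remaining command-line arguments through unchanged.
 main :: IO ()
 main = do
   args <- getArgs
   callProcess "/usr/lib/ghc-7.10/bin/cabal"
     ("--config-file=/usr/lib/ghc-7.10/cabal.config" : args)
)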


Wolfram


[Haskell] Relational Algebraic Methods --- RAMiCS 2014 --- Final CFP, with Deadlines extended!

2013-10-26 Thread Wolfram Kahl
Programme Committee
-------------------

Peter Höfner             (NICTA, Australia; Publicity chair)
Ali Jaoua                (Doha, Qatar)
Peter Jipsen             (Chapman U., USA; PC co-chair)
Wolfram Kahl             (McMaster U., Canada; PC co-chair)
Tadeusz Litak            (Erlangen, Germany)
Larissa Meinicke         (U. Queensland, Australia)
Szabolcs Mikulas         (London, UK)
Bernhard Möller          (Augsburg, Germany)
Martin E. Müller         (St. Augustin, Germany; General chair)
José Oliveira            (U. Minho, Portugal)
Ewa Orłowska             (Warsaw, Poland)
Matthew Parkinson        (Microsoft Research, UK)
Damien Pous              (CNRS, France)
Ingrid Rewitzky          (Stellenbosch, South Africa)
Holger Schlingloff       (Berlin, Germany)
Gunther Schmidt          (Munich, Germany)
Renate Schmidt           (Manchester, UK)
Georg Struth             (Sheffield, UK)
George Theodorakopoulos  (Cardiff, UK)
Michael Winter           (Brock U., Canada)



Steering Committee
------------------

Rudolf Berghammer  (Kiel, Germany)
Jules Desharnais   (Laval U., Canada)
Harrie de Swart(Rotterdam, Netherlands)
Ali Jaoua  (Doha, Qatar)
Bernhard Möller(Augsburg, Germany)
Ewa Orłowska   (Warsaw, Poland)
Gunther Schmidt(Munich, Germany)
Renate Schmidt (Manchester, UK)
Michael Winter (Brock U., Canada)



Organising Committee
--------------------

Martin E. Müller, Sankt Augustin, Germany: Conference Chair, Local Organiser
Peter Höfner, NICTA, Australia: Publicity
Peter Jipsen, Chapman U., USA: PC Co-Chair
Wolfram Kahl, McMaster U., Canada: PC Co-Chair



Re: base package (Was: GHC 7.8 release?)

2013-02-15 Thread Wolfram Kahl
On Thu, Feb 14, 2013 at 03:48:51PM +0100, Joachim Breitner wrote:
 
> Yesterday, I experimented a bit with base’s code, [...]
>
> Maybe the proper way is to reverse the whole approach: Leave base as it is,
> and then build re-exporting smaller packages (e.g. a base-pure) on top
> of it. The advantage is:
>   * No need to rewrite the tightly intertwined base.
>   * Libraries still have the option to have tighter dependencies.
>   * Base can evolve with lots of breaking changes, as long as they
>     do not affect the API provided by the smaller packages.
>   * Development of this collection can happen outside the GHC tree.
>   * Alternative implementations for (some of) these packages can be
>     created, if the reason why they could not be moved out of base
>     is one of implementation, not of API.
>
> How does that sound?

Essentially good to me...


One might consider instead (as has been proposed before, I believe),
to rename the current ``base'' to something like ``ghc-base''
which is not intended to be depended on by packages not shipped with GHC
(that is, by default ``hidden'' in ghc-pkg), and instead export:
  base  with a very stable interface,
  io    with a very stable interface,
  GHC   with a probably rapidly evolving interface,
  *     possibly other packages giving access to internals.

Most packages that currently depend on ``base'' would then depend
only on ``base'' and possibly ``io'', and by virtue of the stability
of these two interfaces would therefore not be affected
by most GHC releases.

This would effectively be
   ``splitting the interfaces GHC and io out from base''
instead of
   ``deprecating base and replacing it with the three new interfaces
 base-pure, io, and GHC''.

That choice is possibly mostly a matter of taste ---
I think that the name ``base'' is good for a user-facing interface,
and the name ``ghc-base'' more indicative of its
implementation-dependent character.


Wolfram



+RTS -S heap reporting oddity

2012-08-17 Thread Wolfram Kahl
During one of my long Agda runs (with GHC-7.4.2), I observed the following
output, with run-time options ``+RTS -S -H11G -M11G -K256M'':

7694558208  30623864 3833166176  0.11  0.11  234.75  234.7900  (Gen:  0)
7678904688  29295168 3847737784  0.11  0.11  242.04  242.0900  (Gen:  0)
7662481840  29195736 3861451856  0.11  0.11  249.31  249.3500  (Gen:  0)
7647989280  26482704 3872463688  0.12  0.12  256.64  256.6800  (Gen:  0)
4609865360  25764016 3886000448  0.09  0.09  261.04  261.0900  (Gen:  0)
4581294920  19435032 3891512272  0.07  0.07  265.37  265.4200  (Gen:  0)
4568757088  21095864 3902286000  0.08  0.08  269.70  269.7400  (Gen:  0)
4546421608  21618856 3913923976  0.09  0.09  274.04  274.0900  (Gen:  0)
452151 2894668056 3484748224  7.63  7.63  285.94  285.9800  (Gen:  1)
8085358392  23776128 3499185336  0.11  0.11  293.49  293.5300  (Gen:  0)
8064630856  32055112 3515876576  0.13  0.13  300.91  300.9500  (Gen:  0)
8040500112  31477608 3528105088  0.12  0.12  308.37  308.4100  (Gen:  0)
8031456296  29641328 3540632456  0.11  0.11  315.83  315.8700  (Gen:  0)
8018447264  30187208 3554339600  0.12  0.12  323.26  323.3100  (Gen:  0)

To my untrained eye, this seems to be saying the following:
In the first 4 lines, the heap runs (almost) full before (minor) collections.
In lines 5 to 9 it apparently leaves 3G empty before collection,
but ``those 3G'' then appear on line 9 in the ``amount of data copied
during (major) collection'' column, and after that it runs up to fill
all 11G again before the next few minor collections.

What is really going on here?
(Previously I had never seen such big numbers in the second column on
major collections.)


Wolfram


P.S.: Same effect again, but more dramatic, later during the same Agda run:

448829488   4864536 5710435424  0.02  0.02 1422.80 1422.9000  (Gen:  0)
445544064   3251712 5710248752  0.01  0.01 1423.23 1423.3200  (Gen:  0)
450236784   4148864 5712696848  0.02  0.02 1423.68 1423.7700  (Gen:  0)
445240152   3828120 5713606328  0.02  0.02 1424.10 1424.1900  (Gen:  0)
443285616   5906448 5717731864  0.02  0.02 1424.52 1424.6100  (Gen:  0)
430698248 4773500032 5363214440  9.30  9.30 1434.21 1434.3000  (Gen:  1)
6148455592  13490304 5374609848  0.07  0.07 1439.83 1439.9200  (Gen:  0)
6185350848  27419744 5389326896  0.11  0.11 1445.50 1445.5900  (Gen:  0)
6168805736  23069072 5398725784  0.11  0.11 1451.22 1451.3200  (Gen:  0)
6157744328  23451872 5408370152  0.09  0.09 1456.93 1457.0300  (Gen:  0)
6151715272  25739584 5421044592  0.11  0.11 1462.62 1462.7200  (Gen:  0)
6132589488  24541688 5428809632  0.10  0.10 1468.26 1468.3700  (Gen:  0)



Re: Call to arms: lambda-case is stuck and needs your help

2012-07-09 Thread Wolfram Kahl
> [...] for clunky...

For argument supply and ``into'' we use the opposite sequence,
``arg  into  pat'', and clunky could then become:

  clunky = caseof
      env ->
        var1 ->
          var2 ->
            lookup env var1  into
              Just val1 ->
                lookup env var2  into
                  Just val2 -> val1 + val2
      ... other case bindings for clunky ...

(This runs into similar nested layout issues as discussed recently;
 if we were to change the layout rule (perhaps only for the new keywords)
 to not apply if the next token is separated from the layout keyword
 by only a single space, we could turn this into a more compact shape:

  clunky = caseof
      env -> var1 -> var2 -> lookup env var1  into Just val1
        -> lookup env var2  into Just val2 -> val1 + val2
      ... other case bindings for clunky ...

 (The alignment of the two lookups and their preceding -> is irrelevant here.)
)

``into'' would generalise pattern guards by allowing the same expression
to be matched against different patterns and still enable fall-through:

  f = caseof
      [x] -> g x  into  []         -> []
                        a : b : bs -> (a, b) : h bs
      ... other cases, including g returning a singleton ...

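(For comparison, the effect of ``into'' in this example can be
approximated with today's pattern guards, though only per equation, not
between case alternatives; the following sketch, with made-up helpers
g and h, is only an illustration, not part of the proposal:

  g :: Int -> [Int]
  g x = [x, x + 1]

  h :: [Int] -> [(Int, Int)]
  h (a : b : bs) = (a, b) : h bs
  h _            = []

  f :: [Int] -> [(Int, Int)]
  f xs
    | [x] <- xs, a : b : bs <- g x = (a, b) : h bs
  f _ = []    -- other cases, including g returning a singleton
)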

This could be:

  Proposal 4: ``case-group argument supply'' or ``generalised pattern guards''



Best wishes,

Wolfram


----------------------------------------------------------------------


@InProceedings{Kahl-2004a,
  author =   {Wolfram Kahl},
  title ={Basic Pattern Matching Calculi: A Fresh View on Matching Failure},
  crossref =  {FLOPS2004},
  pages ={276--290},
  DOI = {10.1007/978-3-540-24754-8_20},
  SpringerURL = {http://www.springerlink.com/content/3jet4qgw1q2nu0a8/},
  abstract = {We propose pattern matching calculi as a refinement
  of $\lambda$-calculus that integrates mechanisms appropriate for
  fine-grained modelling of non-strict pattern matching.

  Compared with the functional rewriting strategy
  usually employed to define the operational semantics of
  pattern matching in non-strict functional programming languages
  like Haskell or Clean, our pattern matching calculi
  achieve the same effects using simpler and more local rules.

  The main device is to embed into expressions
  the separate syntactic category of matchings;
  the resulting language naturally encompasses
  pattern guards and Boolean guards as special cases.

  By allowing a confluent reduction system and a normalising strategy,
  these pattern matching calculi provide a new basis for
  operational semantics of non-strict programming languages
  and also for implementations.}
}

@InProceedings{Kahl-Carette-Ji-2006a,
  author =   {Wolfram Kahl and Jacques Carette and Xiaoheng Ji},
  title ={Bimonadic Semantics for Basic Pattern Matching Calculi},
  crossref =  {MPC2006},
  pages ={253--273},
  DOI = {10.1007/11783596_16},
  SpringerURL = {http://www.springerlink.com/content/2715070606u63648/}
}



Re: ghc-cabal-Random

2011-12-31 Thread Wolfram Kahl
On Sat, Dec 31, 2011 at 11:43:26PM +0200, Yitzchak Gale wrote:
> Serge D. Mechveliani wrote:
> > I have  ghc-7.4.0.20111219  made from source and tested it on the
> > DoCon-2.12 application -- thanks to people for their help!
> > It looks all right.
> > This was -- with skipping the module Random.
> > Now it remains to add the Random package.
> > I have taken  AC-Random Version 0.1  from hackage...
> > Its installation requires Cabal..
> > And Cabal is difficult to install..
>
> [...]
>
> Today, it is very unusual to use GHC by itself.
> To use Haskell, you install the Haskell Platform.
> That is GHC together with Cabal and a basic
> set of libraries. It is very easy to install.
>
> http://hackage.haskell.org/platform/

However, since you are willing and able to test bleeding-edge versions of GHC,
you need to be able to live without the platform, which typically
catches up with new GHC versions only after a couple of months.

> Almost all Haskell software is expected to
> be installed using Cabal nowadays.

It is important to know that people associate two packages with the name
``Cabal'':

 * Cabal : package infrastructure shipped with GHC ---
   only a library.
 * cabal-install : package manager requiring a number of other packages
   (in particular networking packages), and providing the executable ``cabal''.

Life without cabal-install is not only possible,
but also safer.
(See also: http://www.vex.net/~trebla/haskell/sicp.xhtml )

If you installed an experimental GHC version,
it makes sense to install packages into the same directory,
say /usr/local/packages/ghc-7.4.0.20111219.

Assuming you downloaded AC-Random-0.1.tar.gz, do the following:

  tar xzf AC-Random-0.1.tar.gz
  cd AC-Random-0.1
  ghc --make Setup
  ./Setup configure --prefix=/usr/local/packages/ghc-7.4.0.20111219 -p
  ./Setup build -v
  ./Setup haddock
  ./Setup install -v


Hope this helps!


Wolfram



Re: Recompile with -fPIC (was building a patched ghc)

2011-02-18 Thread Wolfram Kahl
I am not certain, but this may be the same problem that I once had,
and that was solved by updating to binutils-2.20.

 ld --version
GNU ld (GNU Binutils) 2.20.1.20100303


Wolfram


On Fri, Feb 18, 2011 at 11:34:03AM +0100, José Pedro Magalhães wrote:
 Hi all,
 
 I'm getting the same error as Alexy below in some 64bit linux system. What
 can I do? Adding -fPIC and also -dynamic does not seem to solve the problem.
 Also, this only happens with a perf build; devel1 works fine.
 
 
 Thanks,
 Pedro
 
 On Sat, Jun 26, 2010 at 05:56, braver delivera...@gmail.com wrote:
 
  An attempt to build the trunk gets me this:
 
  /opt/portage/usr/lib/gcc/x86_64-pc-linux-gnu/4.2.4/../../../../x86_64-
  pc-linux-gnu/bin/ld: rts/dist/build/RtsStartup.dyn_o: relocation
  R_X86_64_PC32 against symbol `StgRun' can not be used when making a
  shared object; recompile with -fPIC
 
  -- I use prefix portage on a CentOS box, admittedly a non-standard
  setup.  Its gcc is found first and it wants -fPIC...  Should I just
  add it to CFLAGS or what?
 
  -- Alexy
  ___
  Haskell-Cafe mailing list
  haskell-c...@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 
 
 
 !DSPAM:4d5e4b2789541804284693!

 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
 
 
 !DSPAM:4d5e4b2789541804284693!




How to activate DEBUG in Data.HashTable?

2010-11-28 Thread Wolfram Kahl
Hello,

  with a large Agda development, I have a reproducible segmentation fault
that I have been able to localise to the serialisation
(Agda.TypeChecking.Serialise.encode), which heavily relies on Data.HashTable.

Now I find that Data.HashTable (from GHC-7.0.1) has a CPP-enabled DEBUG
version --- is there a ``proper'' way to activate this? Possibly even so
that it can be added ex-post to an existing installation? (So I don't
have to re-install all packages.)

mk/ways.mk says that the ``debug'' way only affects the RTS...


Wolfram



2nd CFP: RelMiS 2001

2001-01-02 Thread Wolfram Kahl


[please post.  apologies for multiple copies]


SECOND CALL FOR PAPERS


RelMiS 2001 - Relational Methods in Software


7-8 April 2001, Genova, Italy

 http://ist.unibw-muenchen.de/RelMiS/

   A Satellite Event to ETAPS 2001


Important Dates
===

Deadline for submission:     10 January  2001
Notification of acceptance:   9 February 2001
Final version due:           28 February 2001
Workshop dates:             7-8 April    2001


Workshop Topics
===

* Relational Specifications and Modelling:
 methods and tools, tabular methods, abstract data types
* Relational Software Design and Development Techniques:
 relational refinement, heuristic approaches for derivation, correctness
 considerations, dynamic programming, greedy algorithms, catamorphisms,
 paramorphisms, hylomorphisms and related topics
* Programming with Relations:
 prototyping, testing, fault tolerance, information systems, information
 coding
* Implementing relational algebra with mixed representation of relations
* Handling of Large Relations:
 problems of scale, innovative representations, distributed
 implementation


Submissions
===

Submissions will be evaluated by the Program Committee for inclusion in the
proceedings, which will be published in the ENTCS series. Papers must
contain original contributions, be clearly written, and include appropriate
reference to and comparison with related work. Papers should be submitted
electronically as uuencoded PostScript files at the address
[EMAIL PROTECTED] Preference will be given to papers that are no
shorter than 10 and no longer than 15 pages. A separate message should also
be sent, with a text-only one-page abstract and with mailing addresses (both
postal and electronic), telephone number and fax number of the corresponding
author.

Final versions will have to be submitted as LaTeX source and have to adhere
to the ENTCS style!


Programme Committee
===

Rudolf Berghammer (Kiel), Jules Desharnais (Quebec), Wolfram Kahl (Munich),
David L. Parnas (Hamilton), Gunther Schmidt (Munich)


-

E-Mail: [EMAIL PROTECTED]
Workshop home page: URL: http://ist.unibw-muenchen.de/RelMiS/





Editor Combinators

2000-06-21 Thread Wolfram Kahl



Editor Combinators
==

   http://ist.unibw-muenchen.de/EdComb/


Editor combinators allow one to assemble structure editors compositionally
instead of generating them from language descriptions,
just as parsing combinators allow one to assemble parsers compositionally
instead of employing parser generators to generate parsers from
grammar descriptions.

We have put together a first, simple editor combinator library in Haskell,
connected it to the GUI toolkit FranTk,
and described and documented the editor combinator library
in a technical report, all available from the EdComb home page at URL:

http://ist.unibw-muenchen.de/EdComb/


Best regards,

Wolfram Kahl




Re: lines --- to derive, or not to derive ;-)

2000-05-11 Thread Wolfram Kahl


Simon Marlow writes:

> You didn't mention the accumulating parameter version:

[[[with correction pointed out by Koen Claessen:]]]

>  lines :: String -> [String]
>  lines s = lines' s ""
>    where
>    lines' []       acc = [reverse acc]
>    lines' ('\n':s) acc = reverse acc : lines' s ""
>    lines' (c:s)    acc = lines' s (c:acc)
>
>  This one is more than twice as fast as the foldr version, despite the fact
>  that it needs to reverse the accumulating parameter for each line.

According to my measurements,
with HBC it is twice as fast as the prelude version,
and on the whole it is slightly faster than the foldr version.

However, it can still be sped up: Just use the old trick of
avoiding reverse by using a functional accumulator:

 lines :: String -> [String]
 lines s = lines' s id
   where
   lines' []       acc = [acc []]
   lines' ('\n':s) acc = acc [] : lines' s id
   lines' (c:s)    acc = lines' s (acc . (c:))

(Interestingly, this is the only optimisation that HBC
 does not gratefully take advantage of.)


As Koen Claessen pointed out furthermore,
neither this nor my foldr definition are fully lazy.
I produced a first lazified variant of my foldr definition,
and it has running times similar to those of the prelude variants.


All the stuff is on the updated page:

http://ist.unibw-muenchen.de/Lectures/FT2000/FP/Lines.html


So the interesting question is whether we can get laziness
without these continuously reconstructed pairs.



Cheers,

Wolfram




Re: lines --- to derive, or not to derive ;-)

2000-05-10 Thread Wolfram Kahl


Shin-Cheng Mu [EMAIL PROTECTED]
is puzzled why the derived foldr version of lines
is significantly faster than the prelude version,
which he recognises as an unfold:

> I am curious why the Prelude version is less efficient than
> your fold version? It seems to me the fold version constructs a
> cons cell and deconstructs it immediately every time it glues a
> character to the first line, while the Prelude version only
> wastes a pair cell for every line.

I am surprised that none of the experts has taken this on.


So I have experimented a little bit further,
first assuming that ``break (== '\n')'' might be a lot slower
than ``span (/= '\n')'' or even ``span ('\n' /=)'' --
but the running time differences are small.

So here is my --- maybe superficial --- conclusion,
somewhat illuminated by the animations
now to be found on my ``lines page'' at:

http://ist.unibw-muenchen.de/Lectures/FT2000/FP/Lines.html

 * In the unfold definition,
   the pair cell is constructed and destructed for every character,
   and every construction of the pair cell
   induces two destructive accesses,
   via different evaluation paths, and at different times.

 * In the fold definition,
   the cons cell is destructed locally in one step.

This is also illustrated by the fact that the HOPS animations
for an optimised version of the prelude definition
have essentially just one step more per character of the input string
than those for the fold definition.


Does this sound plausible to the experts?


Wolfram




lines --- to derive, or not to derive ;-)

2000-05-03 Thread Wolfram Kahl


Currently I teach functional programming classes again,
and I wanted to present the derivation of ``lines''
that can be found in [Bird-Wadler-1988].
However, since I wanted to spare my students the confusion of
a non-prelude-equivalent definition of ``unlines'',
I re-did the development of ``lines'' from the current
prelude definition of ``unlines'', therewith in fact solving
exercise 4.3.3. of that book :-).

To my surprise,
the prelude definition of lines:

 lines   :: String -> [String]
 lines "" = []
 lines s  = let (l, s') = break (== '\n') s
            in  l : case s' of
                      []      -> []
                      (_:s'') -> lines s''

(for which I have not yet seen a derivation,
 and which is actually used in hugs, ghc, and hbc
 although the report allows more efficient implementations)
not only is not much less cryptic than the definition I arrived at
(and anybody would arrive at):

 lines :: String -> [String]
 lines = foldr f []
  where
   f '\n' xss   = "" : xss
   f x []       = [[x]]
   f x (ys:yss) = (x:ys) : yss

but it is also much less efficient:

Here are the best running times (out of ten runs) of

 main = interact (unlines . lines)

on a 1.4Mb source file using

 *  first the prelude definition,
 *  then the new definition,
 *  then the new definition with foldr expanded
(the fact that this is faster should be an
 urgent challenge to compiler writers),
 *  and finally even with f expanded: 

              ghc-4.06 -O      |   hbc-0.9999.5b -O
              real     user    |   real     user
  PrelLines:  1.772s   1.660s  |   2.615s   2.330s
  NewLines:   1.340s   1.220s  |   1.428s   1.280s
  NewLines2:  1.204s   1.080s  |   1.361s   1.210s
  NewLines3:  1.190s   1.060s  |   1.291s   1.140s
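
(For reference, ``the new definition with foldr expanded'' (NewLines2)
presumably looked roughly like this --- a reconstruction, not the
original source:

 lines :: String -> [String]
 lines []     = []
 lines (x:xs) = f x (lines xs)
  where
   f '\n' xss      = "" : xss
   f x'   []       = [[x']]
   f x'   (ys:yss) = (x':ys) : yss
)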

Since everybody on the Haskell committee knows [Bird-Wadler-1988] :-),
I can only conclude that the prelude definition of ``lines''
that is contained in the report is a typo (TM) ;-),
especially since it has been stated repeatedly on the Haskell list
that prelude definitions in the report should also serve as
examples of good coding practices.


The picture is essentially the same for ``words'',
where I would propose the following definition:

 words :: String -> [String]
 words [] = []
 words (x:xs)
  | Char.isSpace x  =  words1 xs
  | otherwise       =  case words xs of
                         []       ->  [[x]]
                         (ys:yss) ->  (x:ys):yss
  where
   words1 [] = []
   words1 xs@(y:ys)
    | Char.isSpace y  =  words1 ys
    | otherwise       =  [] : case words ys of
                                []       ->  [[y]]
                                (zs:zss) ->  (y:zs):zss

All the details can be found at:

http://ist.unibw-muenchen.de/Lectures/FT2000/FP/Lines.html


Best regards,

Wolfram Kahl





Re: Trivial Haskell Concurrency problem

2000-02-15 Thread Wolfram Kahl


Simon Peyton-Jones [EMAIL PROTECTED] writes:
  
> | elegant.  If MVar's were instances of Ord as well as Eq, a
> | neat solution would
> | be to always get the least MVar first, but they aren't.  So
> | what should one do?
>
> But you could make Flag an instance of Ord
>
>     data Flag = MkFlag Int (MVar Bool)
>
> Now newMVar needs to consult a global variable to get the
> next Flag number, but after that there's no global locking.
>
> This is, of course, precisely what we'd have to do to make
> MVars an instance of Ord --- but it would impose a cost on
> all MVars, whether or not they needed it, which is why we've not
> done it.
This is something that I have long been wondering about
(perhaps it is just because of my ignorance):
Wouldn't stable pointers be a cheaper and more appropriate means
to get Ord for MVars, STRefs, and IORefs?


Best regards,

Wolfram



Re: Trivial Haskell Concurrency problem

2000-02-15 Thread Wolfram Kahl

Simon Peyton-Jones [EMAIL PROTECTED] answers my question:
  
> | This is something that I have long been wondering about
> | (perhaps it is just because of my ignorance):
> | Wouldn't stable pointers be a cheaper and more appropriate means
> | to get Ord for MVars, STRefs, and IORefs?
>
> Could be -- but do we really want to clog up the stable-pointer table
> with an entry for every MVar, whether or not anyone is interested in
> ordering?
>
> I think what you want is a distributed way to get a unique,
> as George suggested.  Then you can pair that with an MVar when
> you want something comparable.  The unique can include the processor
> id, so it can be globally unique.  64 bits?
>
> I'm still leery of putting such a unique inside every MVar, IORef etc.
> But maybe I shouldn't worry.

Perhaps I should give some background:
I am interested in implementing graph structures,
and would need to handle mappings between graphs,
or node labellings, or whatever.
All these mappings need not reside in the graph itself,
so they would require some FiniteMap structure.
However, most decent such data types require Ord
for being able to work efficiently.
If IORefs (or whatever I use) are not ordered,
then I have essentially two possibilities:

1) Do the Integer trick: slows down my program
   (as actually experienced in OCaml)

2) Do the memory management myself by allocating
   huge arrays and using the indices which are in Ord: clumsy and unwieldy

So I would already be happy if IORefs, STRefs and MVars came with a variant
that is in Ord (consider this as a concrete proposal for the standard library)
--- even if some implementations choose to implement that via the
Integer trick: hopefully the best implementations
would provide something faster ;-)
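
(To make the ``Integer trick'' concrete, here is a minimal sketch in
today's module names, assuming a global counter via unsafePerformIO;
the names OrdMVar and newOrdMVar are made up for the example:

 import Control.Concurrent.MVar
 import Data.IORef
 import System.IO.Unsafe (unsafePerformIO)

 -- An MVar paired with a globally unique Integer,
 -- so that Eq and Ord instances become definable.
 data OrdMVar a = OrdMVar !Integer (MVar a)

 instance Eq (OrdMVar a) where
   OrdMVar i _ == OrdMVar j _  =  i == j

 instance Ord (OrdMVar a) where
   compare (OrdMVar i _) (OrdMVar j _)  =  compare i j

 -- The global supply of unique Integers.
 {-# NOINLINE counter #-}
 counter :: IORef Integer
 counter = unsafePerformIO (newIORef 0)

 -- Allocate a fresh MVar together with the next unique Integer.
 newOrdMVar :: a -> IO (OrdMVar a)
 newOrdMVar x = do
   i <- atomicModifyIORef counter (\n -> (n + 1, n))
   m <- newMVar x
   return (OrdMVar i m)
)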


Best regards,

Wolfram




Re: graphs

2000-02-01 Thread Wolfram Kahl


Wojciech Moczydlowski, Jr [EMAIL PROTECTED] (Khaliff TM) wrote:
  
> The second question - does anybody know about a GHC/Haskell library with
> graphs implementation?

Depending on what precisely you need, Martin Erwig's
``Functional Graph Library'' might contain something useful for you:

http://www.informatik.fernuni-hagen.de/import/pi4/erwig/fgl/



Wolfram



Re: Referential Transparency

1999-10-08 Thread Wolfram Kahl


  
> | As far as I understood the matter, referential transparency
> | denotes the property of a language, in which variables of the
> | same scope do not change their value.
>
> So ML and Erlang are referentially transparent too?

Before everybody gets completely muddled up,
I point to a page where I conserve an old USENET NEWS posting
by Tom DeBoni [EMAIL PROTECTED] quoting the definitions
from a few relevant books:

http://www2.informatik.unibw-muenchen.de/kahl/reftrans.html


Regards,

Wolfram






Re: Wanted: a Business Programming Language!

1999-09-21 Thread Wolfram Kahl


J.P. Morrison [EMAIL PROTECTED] writes on the dataflow list:

  
> I have been in the IT business for about 40 years, and have been
> maintaining PL/I programs intensively for the last year or so.  In 1994
> I wrote a book called "Flow Based Programming" which was quite well
> received.  It describes a technique for developing big applications as
> networks of asynchronous processes communicating via streams of
> formatted data chunks.  In fact this technique has been in continuous
> use for the last 30 years at various places I have worked (surprise!),
> and I feel it has proven its usefulness.  It also leads very naturally
> to the concept of mini-languages, where a given process can be written
> in a language appropriate to the application area.  However, designing
> such a language is non-obvious!  As a result of my work, I feel pretty
> sure that both variables and constants are undesirable - variables
> because they are very sensitive to the timing of events, and constants
> because they usually aren't!
>
> Now, even if we replace variables by entity attributes, in most
> languages I am familiar with the coding schemes are usually mathematical
> - i.e. most variables are either binary, packed decimal, BCD, etc.  What
> I am looking for is a language where most values have dimension, which
> constrains the operations that can be performed on them.  E.g.
>
> - currency can be added, but not multiplied
> - currency can be multiplied or divided by non-dimensioned quantities,
>   but not added to them
> - distances can be added; if multiplied, you get area; multiply again,
>   and you get volume!
> - dates can be subtracted, but not added; number of days can be added or
>   subtracted to/from a date
> - etc.
>
> Further, dimensioned quantities are measured in units, so it would be
> nice to be able to add miles to kilometers - I don't care what unit is
> used for the result, so long as it is identified.  Currency conversions
> are more problematic, as a) the rate changes frequently, and b) there is
> often a charge, so some amount of cash has to be sent somewhere else!
>
> If your answer is OOP, it must be symmetrical (the permitted operations
> must depend on the types of all variables - in which case, how do you
> manage all the combinations?) - the popular OOP languages are
> asymmetrical, which is not acceptable.

Multi-parameter type classes in Haskell.

Haskell:

http://www.haskell.org/

GHC User Manual --- Section about Multi-parameter type classes:

http://www.haskell.org/ghc/docs/latest/users_guide/users_guide-5.html#ss5.5

Paper:

http://www.dcs.gla.ac.uk/~simonpj/multi.ps.gz
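
For a first flavour of how types can enforce such dimension restrictions
even before bringing in multi-parameter classes, consider this minimal
sketch (an illustration only, not from the original exchange):

 newtype Currency = Currency Rational  deriving (Eq, Ord, Show)
 newtype Scalar   = Scalar   Rational  deriving (Eq, Ord, Show)

 -- Currency can be added ...
 addMoney :: Currency -> Currency -> Currency
 addMoney (Currency x) (Currency y) = Currency (x + y)

 -- ... and scaled by a dimensionless quantity ...
 scaleMoney :: Scalar -> Currency -> Currency
 scaleMoney (Scalar k) (Currency x) = Currency (k * x)

 -- ... but Currency * Currency is simply not provided,
 -- so it cannot even be written.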

> Also, the language must be
> natural to non-programmers - business types ought to be able to express
> business rules without having to learn some abstruse notation...
  

Well, Haskell looks more like a kind of mathematical notation than
the kind of programming language (C, COBOL, Java) most non-programmers
are used to consider as non-natural, so it might fit your ideas ;-)

More seriously:

There has been quite some research about using Haskell as background
for domain-specific languages, so the approach to follow would
require some real Haskell programming to set up the necessary classes
and combinators, and then business-type sticking-together
of these building blocks.

See the home page of Paul Hudak for DSL work:

http://www.cs.yale.edu/~hudak.html


A recent thread on the Haskell mailing list discussed units:

Search for ``Units'' in:

http://www.dcs.gla.ac.uk/mail-www/haskell/


This message also goes to the Haskell list.


Best regards,

Wolfram





Re: The dreaded layout rule

1999-07-30 Thread Wolfram Kahl

Simon Peyton-Jones [EMAIL PROTECTED] writes:
  
> > In other words, it is a bug (and GHC and Hugs don't do it
> > right - see my previous message; from your comment, I
> > presume HBC also doesn't follow the definition).  I think,
> > the only Right Thing is to remove this awful rule (unless
> > somebody comes up with a rule that can be decided locally).
>
> Maybe so.  But (H98 editors hat on) this is more than a "typo".

I am surprised!  ;-)  

> It's a Haskell 2 issue.  Perhaps there will be no fully conforming
> H98 compilers!

Perhaps it would be a reasonable Haskell 1.6 issue?


Wolfram





Re: diagonalisation

1999-07-20 Thread Wolfram Kahl

Hi all,

  since I have gotten into the habit of relating proposed diagonalisation
functions, I will not resist this time, either ;-)

Koen Claessen [EMAIL PROTECTED] writes:
  
> as // bs = diag [] [ [ (a,b) | a <- as ] | b <- bs ]
>
> diag current []              = diag [] current
> diag current (new : pending) = heads ++ diag (new : tails) pending
>  where
>   (heads,tails) = unzip [ (c,cs) | (c:cs) <- current ]
>
> I thought it was short and sweet :-)

It is.  At a price, though:

Just as my first versions, it does not terminate in finite cases; try:
  
   take 20 (diag [] [[3..],[1,2]])

Otherwise, it is in the class of ``upside-down diagonals'':

Main> take 20 (diag [] [[(x,y) | y <- [0..] ] | x <- [0..]])
[(0,0),(1,0),(0,1),(2,0),(1,1),(0,2),(3,0),(2,1),(1,2),(0,3)
,(4,0),(3,1),(2,2),(1,3),(0,4),(5,0),(4,1),(3,2),(2,3),(1,4)]

and is very close to the variant of Jón Fairbairn:

>  diag :: [[a]] -> [a]
>  diag l = d [] l
>
>  d [] [] = []
>  d acc [] = --  d [] acc would do, but muddles the order;
>             heads acc ++ d (rests acc) []
>  d ls (l1:rest) = heads (ls') ++ d (rests ls') rest
>   where ls' = l1: ls
>
>  heads l = [a  | (a: _)  <- l]
>  rests l = [as | (_: as) <- l]

The second line of Jón Fairbairn's ``d'' handles termination;
this is what is missing in Koen Claessen's solution.
Koen Claessen's ``new'' is analysed one round earlier
in Jón Fairbairn's solution (and also in my corresponding solution DiagWK5).

The pairs in Koen Claessen's solution seem to be more expensive than
the double analysis in Jón Fairbairn's solution:


   220  200  500

DiagKC0:00.14  0:01.73  0:24.27  1:05.41
DiagJF0:00.13  0:01.64  0:21.85  0:57.99
DiagWK5   0:00.12  0:01.34  0:18.92  0:52.06


The infinite dimensional problem is also quite fun --- I think I shall try
to accumulate some more interesting stuff in this direction and then
set out to condense it in some form suitable for publication,
perhaps even as a Functional Programming Pearl.


Wolfram





Re: instance overlapping

1999-07-18 Thread Wolfram Kahl

Nguyen Phan Dung [EMAIL PROTECTED] writes:

> If I have the following declaration:
>
> class Soup where
> ...
>
> instance Soup String where
> ...
>
> instance Soup t => Soup [t] where
> ...
>
> This will lead to an error: "instance overlapping".
>
> Is there any way to solve this?
> (I could not make an instance for Char)

One possibility is

 newtype MyString = MyString String

 instance Soup MyString where...



For another possibility let us assume:

 class Soup a where
   soupmethod :: Souptype a

 instance Soup String where
   soupmethod = stringsoupmethod

 instance Soup t => Soup [t] where
   soupmethod = souplistfunctor (soupmethod :: Souptype t)

A trick I am then using in these cases is the following:

 class Soup a where
   soupmethod :: Souptype a

 class SoupList a where
   souplistmethod :: Souptype [a]

 instance SoupList a => Soup [a] where
   soupmethod = souplistmethod

 instance SoupList Char where
   souplistmethod = stringsoupmethod

 instance SoupList t => SoupList [t] where
   souplistmethod = souplistfunctor souplistmethod

Maybe it does not give you all the power you want,
but in simple cases it serves me well.
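
(To make the trick concrete, here is an instantiation with Soup reduced
to a single pretty-printing method --- an illustration, not part of the
original exchange:

 import Data.List (intercalate)

 class Soup a where
   soup :: a -> String

 class SoupList a where
   souplist :: [a] -> String

 instance SoupList a => Soup [a] where
   soup = souplist

 instance SoupList Char where
   souplist s = s          -- strings are rendered verbatim

 instance SoupList t => SoupList [t] where
   souplist xss = "[" ++ intercalate "," (map souplist xss) ++ "]"

Now soup "abc" yields "abc", and soup ["ab","c"] yields "[ab,c]",
without any overlap between the String instance and the list instance.)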


Cheers,

Wolfram







Diagonalisation

1999-07-14 Thread Wolfram Kahl


Jón Fairbairn  [EMAIL PROTECTED] writes:
  
>  diagonalise :: [[a]] -> [a]
>  diagonalise l = d [] l
>
>  d [] [] = []
>  d acc [] = --  d [] acc would do, but muddles the order;
>             heads acc ++ d (rests acc) []
>  d ls (l1:rest) = heads (ls') ++ d (rests ls') rest
>   where ls' = l1: ls
>
>  heads l = [a  | (a: _)  <- l]
>  rests l = [as | (_: as) <- l]
  
This differs from the versions given by Mark Jones, Stefan Kahrs and myself
in that it does the diagonals ``upside down''.

However, methodologically it is actually very close to my version,
and even closer to some predecessor of that that I do not have anymore.
In fact, if I introduce into DiagWK3 the corresponding muddling avoidance,
we get:

DiagWK4:

 diag :: [[a]] -> [a]
 diag [] = []
 diag (l:ls) = split id [l] ls
  where
   split :: ([[a]] -> [[a]]) -> [[a]] -> [[a]] -> [a]
   split a [] ls  = case ls of
                      []     -> split id [] (a [])
                      (l:ls) -> split id (a [l]) ls
   split a (l : ls) r = case l of
                          []     -> split a ls r
                          (x:xs) -> x : split (a . (xs :)) ls r


Then a tiny change turns the diagonals ``upside down'', too:

DiagWK5:

 diag :: [[a]] -> [a]
 diag [] = []
 diag (l:ls) = split id [l] ls
  where
   split :: ([[a]] -> [[a]]) -> [[a]] -> [[a]] -> [a]
   split a [] ls  = case ls of
                      []     -> split id [] (a [])
                      (l:ls) -> split id (l : (a [])) ls -- CHANGED!!
   split a (l : ls) r = case l of
                          []     -> split a ls r
                          (x:xs) -> x : split (a . (xs :)) ls r

The timing differences between DiagJF and DiagWK5 are probably
mostly due to some combination of

* the double list analysis effort in heads and rests
* and the tradeoff between lists with (++) and functions with (.).

It might be interesting to look into the compiled code in detail.


   220  200  500

DiagMPJ   0:00.16  0:02.32  0:37.55  1:48.09
DiagMPJ1  0:00.12  0:01.50  0:23.83  1:06.71
DiagMPJ1a 0:00.12  0:01.47  0:23.54  1:06.04

DiagJF0:00.13  0:01.64  0:21.85  0:57.99

DiagWK3   0:00.12  0:01.34  0:18.82  0:52.31
DiagWK4   0:00.11  0:01.33  0:19.29  0:53.22
DiagWK5   0:00.12  0:01.34  0:18.92  0:52.06


Wolfram





Re: Deriving Enum

1999-07-12 Thread Wolfram Kahl

Mark P Jones [EMAIL PROTECTED] writes:
  
> Here's my definition of an integer free diagonalization function.
> [..] As written, I think
> it is a nice example of programming with higher-order functions,
> and, in particular, using function composition to construct a
> pipelined program:
>
>  diag :: [[a]] -> [a]
>  diag  = concat . foldr skew [] . map (map (\x -> [x]))
>   where skew []     ys = ys
>         skew (x:xs) ys = x : comb (++) xs ys
>
> This uses an auxiliary function comb, which is like zipWith
> except that it doesn't throw away the tail of one list when it
> reaches the end of the other:
>
>  comb :: (a -> a -> a) -> [a] -> [a] -> [a]
>  comb f (x:xs) (y:ys) = f x y : comb f xs ys
>  comb f []     ys     = ys
>  comb f xs     []     = xs

This is in fact much nicer and much more intuitive than my version!

The only improvement that comes to my mind is to apply the equality

   ([x] ++) = (x :)

wherever possible:

 diag :: [[a]] -> [a]
 diag  = concat . foldr skew []
  where skew []     ys = ys
        skew (x:xs) ys = [x] : conscomb xs ys

 conscomb :: [a] -> [[a]] -> [[a]]
 conscomb (x:xs) (y:ys) = (x : y) : conscomb xs ys
 conscomb []     ys     = ys
 conscomb xs     []     = map (:[]) xs

Perhaps this looks slightly less elegant, but it claws back quite
some efficiency: I compiled both versions (as DiagMPJ and DiagMPJ1)
and my original version (as DiagWK) with ghc-4.04 (perhaps a week old)
with ``-O'' and with the following test module
(substitute the respective definitions for ``@1''):

 module Main where

 import System

@1

 test = [[(x,y) | x <- [1..]] | y <- [1..]]
 main = do
  (arg : _) <- System.getArgs
  let n = (read arg :: Int)
  print (length (show (take n (diag test))))

(I also tried Tom Pledger's second version, but it runs out of stack space...)

Best times out of five runs where:


Argument:  220  200

DiagMPJ   0:00.16  0:02.32  0:37.55
DiagMPJ1  0:00.12  0:01.50  0:23.83
DiagWK0:00.11  0:01.33  0:19.10


This is not the first time that I get the impression that,
at least with current implementations,
functions are unbeatable when it comes to efficient data structures ---
perhaps this is why it is called functional programming ;-)


Cheers,

Wolfram






Diagonalisations (was: Re: Deriving Enum)

1999-07-12 Thread Wolfram Kahl


In the meantime I have discovered a flaw in my original
diagonalisation in that it looped in finite cases.

This can easily be mended:

DiagWK1:

 diag :: [[a]] -> [a]
 diag = f id where
  f :: ([[a]] -> [[a]]) -> [[a]] -> [a]
  f a [] = split id (a []) []
  f a (l:ls) = split id (a [l]) ls
  split :: ([[a]] -> [[a]]) -> [[a]] -> [[a]] -> [a]
  split a [] [] = diag (a [])  -- this line was originally missing
  split a [] r = f a r
  split a ([] : ls) r = split a ls r
  split a ((x:xs) : ls) r = x : split (a . (xs :)) ls r

Now we can eliminate f:

DiagWK2:

 diag :: [[a]] -> [a]
 diag [] = []
 diag (l:ls) = split id [l] ls
  where
   split :: ([[a]] -> [[a]]) -> [[a]] -> [[a]] -> [a]
   split a []            []     = diag (a [])
   split a []            (l:ls) = split id (a [l]) ls
   split a ([] : ls)     r      = split a ls r
   split a ((x:xs) : ls) r      = x : split (a . (xs :)) ls r

Another, equivalent version:

DiagWk3:

 diag :: [[a]] -> [a]
 diag [] = []
 diag (l:ls) = split id [l] ls
  where
   split :: ([[a]] -> [[a]]) -> [[a]] -> [[a]] -> [a]
   split a [] ls  = case ls of
                      []     -> diag (a [])
                      (l:ls) -> split id (a [l]) ls
   split a (l : ls) r = case l of
                          []     -> split a ls r
                          (x:xs) -> x : split (a . (xs :)) ls r

The timings are now:


   220  200

DiagMPJ   0:00.16  0:02.32  0:37.55
DiagMPJ1  0:00.12  0:01.50  0:23.83
DiagWK1   0:00.12  0:01.34  0:19.02
DiagWK2   0:00.12  0:01.35  0:19.09
DiagWK3   0:00.12  0:01.34  0:18.82


The only thing that surprises me is
that the compiler does not do the optimization from DiagWK2 to DiagWK3 itself.


Wolfram





Re: Deriving Enum

1999-07-11 Thread Wolfram Kahl

Koen Claessen [EMAIL PROTECTED]
proposes the following diagonalisation function:
  
>   [ (a,b) | (a,b) <- [1..] // [1..] ]
>
> For a suitable definition of (//), for example:
>
>   (//) :: [a] -> [b] -> [(a,b)]
>   xs // ys = diagonalize 1 [[(x,y) | x <- xs] | y <- ys]
>    where
>     diagonalize n xss =
>       xs ++ diagonalize (n+1) (xss1 ++ xss2)
>      where
>       (xs,xss1) = unzip [ (x,xs) | (x:xs) <- take n xss ]
>       xss2      = drop n xss
>
> And it works for any type.

The core function here is

 (diagonalize (1 :: Integer)) :: [[a]] -> [a]

This function diagonalises finite or infinite lists
with arbitrary finite or infinite element lists.


To me, it seems unsatisfactory to have a solution to this pure list problem
with auxiliary functions relying on integers.


It turns out to be a nice exercise to implement

 diagonalise :: [[a]] -> [a]

without any reference to numbers.

Spoiler ahead:
I arrive at the appended version (which does not even use pairs).


Have fun!

Wolfram




   Warning:  Spoiler ahead!!



















 diagonalise :: [[a]] -> [a]
 diagonalise = f id
  where
   f :: ([[a]] -> [[a]]) -> [[a]] -> [a]
   f a []      =  split id (a []) []
   f a (l:ls)  =  split id (a [l]) ls
   split :: ([[a]] -> [[a]]) -> [[a]] -> [[a]] -> [a]
   split a [] r            =  f a r
   split a ([] : ls) r     =  split a ls r
   split a ((x:xs) : ls) r =  x : split (a . (xs :)) ls r






Re: Projects using HUGS or Haskell

1999-06-11 Thread Wolfram Kahl

Simon Peyton-Jones [EMAIL PROTECTED] writes:
> What I have not done (any volunteers) is to export these rules, or
> the function definitions to a thm prover.
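
(The rules in question are presumably GHC's rewrite RULES pragmas;
a standard example of the kind meant, added here for illustration:

 {-# RULES
 "map/map"  forall f g xs.  map f (map g xs) = map (f . g) xs
   #-}
)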

I am in the course of exporting function definitions
(and later probably also rules)
to the term graph transformation system HOPS
(
  URL: http://www2.informatik.unibw-muenchen.de/kahl/HOPS/
)
which can also be considered as a fledgling theorem prover ---
anyway I expect the problems to be essentially the same.

I am trying to finish a short summary next week...

Wolfram






Parsing qualified types

1999-06-08 Thread Wolfram Kahl


Consider the following two Modules:

File A.lhs:
==
 module A where

 data A a = MkA a
==

File B.lhs:
==
 module B where

 import qualified A

 type A a = A.A a    --- Problem!
==

ghc-4.03 (current sources) complains while compiling B.lhs:

B.lhs:5: parse error on input `.'

(Renaming all the types does not make a difference.)

Both ghc-3.02 and Hugs 98 (May 1999) are happy with these modules,
and I also cannot detect anything in the Haskell98 report
justifying this rejection.


Best regards,

Wolfram


P.S.: When I call ``make tags'' from fptools/ghc/compiler,
  the output consists mostly of error messages concerning the absence
  of hsp:

cpp: output pipe has been closed
sh: /var/tmp/kahl/ghc/current/build-4.03/ghc/compiler/hsp: No such file or directory



Re: Here's a puzzle to fry your brains ....

1999-04-29 Thread Wolfram Kahl


John Launchbury posed a nice puzzle about
mutual recursive bindings in the do notation:



test :: [Int]
test = do (x,z) <- [(y,1),(2*y,2), (3*y,3)]
          Just y <- map isEven [z .. 2*z]
          return (x+y)

isEven x = if even x then Just x else Nothing

-

I would consider it as equivalent to:

-

testTrans :: [Int]
testTrans = do (x,z) <- [(\y -> y,1),(\y -> 2*y,2), (\y -> 3*y,3)]
               Just y <- map isEven [z .. 2*z]
               return (x y+y)

isEven x = if even x then Just x else Nothing

-

which hugs accepts and evaluates to [4,6,12,16,24].

The principle is to consider variables (x) bound to expressions containing
later-bound variables (y) as functions accepting values for y.
When x is used, the current y is supplied.


MORE GENERAL would be to consider the following equivalent version
of the original program:

-

test' :: [Int]
test' = do (x,z) <- return (y,1)   ++
                    return (2*y,2) ++
                    return (3*y,3)
           Just y <- map isEven [z .. 2*z]
           return (x+y)

-

and adapt all occurrences of return
to returning functions accepting the later-bound variables.

Patterns fed by such returns are replaced by new function variables (f).

Variables bound in patterns fed by such returns
occur in two situation:

1: (z) before the later bindings:
  Apply to undefined and extract from the pattern,

2: (x) after the later bindings:
  Apply to the now bound variable and extract from the pattern.

-
testTrans' :: [Int]
testTrans' = do f <- return (\y -> (  y,1)) ++
                     return (\y -> (2*y,2)) ++
                     return (\y -> (3*y,3))
                let z = snd (f undefined)
                Just y <- map isEven [z .. 2*z]
                let x = fst (f y)
                return (x + y)
-

Hugs says:

Main> testTrans'
[4,6,12,16,24]


How about it?


Best regards,

Wolfram





Re: Haskell 98 library: Directory.lhs

1999-03-11 Thread Wolfram Kahl


Simon Peyton-Jones proposes:

>  A Haskell 98 addendum
> [ ... ]
> Well, the bits are frozen, but I propose to regard this as a gross
> "typo" and add it to the typos page.
> [ ... ]
> So the "typo" fix I propose is
> [ ... ]
>
> Any objections?



Call it Haskell 1.6 ;-)


Best,

Wolfram






Re: Partial Type Declarations

1999-01-16 Thread Wolfram Kahl

Jeffrey R. Lewis" [EMAIL PROTECTED] writes:
  
  foo := C a = a - b  roughly equiv to foo :: C _a = _a - _b
  
  I can easily imagine that you might want some variables to be a bound, and
  others to be exact, as in
  
  foo :: C a = a - _b
  
  I don't think the above can be expressed with Claus' proposal.
  

You could, by being more explicit than usual:

 foo := forall b => C a => a -> b


Wolfram



Re: Partial Type Declarations

1999-01-16 Thread Wolfram Kahl

To my last message:
  Jeffrey R. Lewis" [EMAIL PROTECTED] writes:

foo := C a = a - b  roughly equiv to foo :: C _a = _a - _b

I can easily imagine that you might want some variables to be a bound, and
others to be exact, as in

foo :: C a = a - _b

I don't think the above can be expressed with Claus' proposal.

  
  You could, by being more explicit than usual:
  
   foo := forall b = C a = a - b
  
I forgot to add:

and this is exactly where problems begin,
since, besides harmless cases like

 foo :: forall b => C int => int -> b

you also want to allow instantiations such as

 foo :: forall b => C [b] => [b] -> b

Therefore

 * either the ``metavariables'' approach makes sense,
   where metavariables are context holes, and
   we allow context substitution

 * or we allow second order substitution.

In this second case, we have to distinguish between

 foo1 := forall b => C a => a -> b

which allows

 foo1 :: forall b => C int => int -> b

but not

 foo1 :: forall b => C [b] => [b] -> b

and

 foo2 := forall b => C (a b) => (a b) -> b

which allows both of


 foo2 :: forall b => C int => int -> b

and

 foo2 :: forall b => C [b] => [b] -> b


This second approach seems to be preferable to me.


Wolfram



Re: Partial Type Declarations

1999-01-16 Thread Wolfram Kahl



Starting from Jeffrey R. Lewis' [EMAIL PROTECTED] wish to
let partial type declarations express binding of SOME type variables

  foo :: C a => a -> _b

and modulating the syntax proposed by Claus Reinke [EMAIL PROTECTED],

  foo := C a => a -> b

I suggested the following notation to express this desire:

 foo2 := forall b => C (a b) => (a b) -> b


This can also be seen as syntactic sugar
for an assertion that can be checked at compile-time
(a type judgement assertion):

 assert (exists a => foo2 :: forall b => C (a b) => (a b) -> b)

This way we get even closer to existing syntax,
although it is of course a departure from
the original intention of Koen Claessen [EMAIL PROTECTED]
to hide subtleties of Haskell's type system from DSL users.

So building on such an explicit assertion syntax,
we might easily adapt some variation of any of the proposals as syntactic sugar.


Nevertheless, I think that the distiction between

 assert (exists a => foo1 :: forall b => C a => a -> b)

and

 assert (exists a => foo2 :: forall b => C (a b) => (a b) -> b)

should not be blurred.

Am I too puristic here?



Cheers,

Wolfram



Re: Partial Type Declarations

1999-01-16 Thread Wolfram Kahl

Jeffrey R. Lewis" [EMAIL PROTECTED] wrote:
  
  Anyway, the only thing missing now in the above proposal
  is a similar flexibility with contexts.
  Say, you want `b' to be a bound, and thus use :=,
  but you want the context to be exact
  (i.e. you don't want extra context elements to be allowed).
  I can't think of any particularly compelling notation for
  "the context is `C a' and no more".
  In this case, the combination of allowing `...' to extend contexts,
  and _a (or similar annotation) for unquantified type variables seems
  more natural.

Getting more and more explicit,
we may well distinguish between

 assert (exists a :: * -> *,
                D :: * -> Context,
                E :: * -> Context,
                F :: Context
         =>
         foo3 :: (forall b :: *
                  =>
                  C (a b), D (a b), E b, F
                  =>
                  (a b) -> b))

and

 assert (exists a :: * -> *,
                D :: * -> Context,
                E :: * -> Context
         =>
         foo3 :: (forall b :: *
                  =>
                  C (a b), D (a b), E b
                  =>
                  (a b) -> b))

or any other specialisations.

Now the DSL user really gets lost,
and we should consult Simon Marlow
for an appropriate layout rule for types ;-)


But the expressivity is ours ;-)


Have fun,

Wolfram



Existentially quantified types and ``deriving Eq''

1998-10-20 Thread Wolfram Kahl

Hello!

The following datatype declaration would, if possible,
actually be very useful for an application I have in mind:


 module Test(V(..)) where
 
 import ST
 
 data V s = forall a . MkV (STRef s a)  deriving Eq


But when compiling it with Ghc-4.00 I get:

==

ecserver ~~ ghc -fglasgow-exts -c test.hs
 
test.hs:5:
Inferred type is less polymorphic than expected
Quantified type variable `a' is unified with `a1'
When checking an existential pattern that binds
a1 :: STRef s a1
b1 :: STRef s1 a
In an equation for function `==':
== (MkV a1) (MkV b1) = (a1 PrelBase.== b1)
In the definition for method `=='


Compilation had errors

==

(Essentially the same happens in Hbc.)

Do I have to understand this error message as signalling that
the ``deriving'' mechanism may not yet be fully aware
of existentially quantified constructors?
(It should be prepared that the rule

== (MkV a1) (MkV b1) = (a1 PrelBase.== b1)

 may not be applicable for typing reasons,
 i.e., before the program-side pattern ``== (MkV a1) (MkV b1)''
 is matched, the typing pattern induced by it should be allowed to fail.)

Or is this a design decision that I just could not find
any documentation for?

Would other people also like to derive classes in such a way for
existentially quantified datatypes?
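
(A workaround sketch, not from the original message, in today's module
names: since the two wrapped references may have different element types,
(==) cannot be applied across them; but one can tag each reference with
a unique Integer at creation time and compare the tags instead:

 {-# LANGUAGE ExistentialQuantification #-}
 import Control.Monad.ST
 import Data.STRef

 data V s = forall a . MkV Integer (STRef s a)

 instance Eq (V s) where
   MkV i _ == MkV j _  =  i == j

 -- The supply of unique tags is itself just an STRef holding a counter.
 newV :: STRef s Integer -> a -> ST s (V s)
 newV supply x = do
   i <- readSTRef supply
   writeSTRef supply (i + 1)
   r <- newSTRef x
   return (MkV i r)
)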


BTW, the sparc-sun-solaris binary is not at the end of its link
on the GHC download page.



Best regards,


Wolfram



Re: Proofs and development

1998-05-28 Thread Wolfram Kahl

Alan Grover ([EMAIL PROTECTED]) writes (on the Clean list):

> I'd then like to modify the source code adding the better algorithm, but I'd
> like to keep the original algorithm as the documentation. The
> language/system should then help me prove that the better version is
> equivalent. I feel that this helps keep the source code understandable, and
> connects it more directly with the requirements and specifications.
>
> ...
>
> This all has relation to literate programming, does anyone else have
> interest in that?

This kind of approach is a big part of the motivation behind my
graphically interactive strongly typed term graph program development
and transformation system HOPS (work in progress), see

URL: http://diogenes.informatik.unibw-muenchen.de:8080/kahl/HOPS/

HOPS in principle is a language-independent term graph programming
framework with support for user-defined text generation from term graphs.
The only constraints are the kind of term graphs supported (with
explicit variable binding and variable identity, and with second-order
rewriting) and the typing system (essentially simply-typed lambda-calculus
with type variables).

HOPS distinguishes

- ``declarations'' (type signatures in Haskell) that introduce a new
  identifier with its typing, and

- ``rules'', which are term graph transformation rules normally
  considered as semantics preserving equations.

  Rules in a certain shape can be used as definitions in functional
  programming languages; other rules may serve as theorems or lemmata.

  The possibility of classifying declarations and rules into groups
  according to their perceived role is not yet fully implemented;
  considerable flexibility is needed here since among the many possible
  definitions: one may be the ``documentation version'', one may be
  best for this compiler, another may be best for that compiler,
  yet another may be best as a starting point for transformation into
  a different paradigm. 


However, attributes on types (such as uniqueness or Haskell's type classes)
are not implemented yet, and currently types are always maximally identified,
which is incompatible with unique types.

Therefore, the current version of HOPS best supports a Haskellish style
of programming, and in the supplied rudimentary standard library
only output to Haskell is currently supported (to a certain extent).



Wolfram