[Haskell-cafe] Re: [Haskell] ANNOUNCE: jhc 0.6.1

2009-06-24 Thread David Barton
Switching to Haskell Cafe; I hope you read that list, John, since it 
seems more suitable to this kind of question.


John Meacham wrote:

Hi, this is to announce the release of jhc 0.6.1. The jhc homepage with
distribution information is at http://repetae.net/computer/jhc/ 


The main new feature in this release is a much simplified
cross-compilation mechanism. While cross-compilation was always possible
with jhc, it used to involve manually copying the C file and calling gcc
with the right options on it; now jhc takes care of this.


A (popular) example would be setting up an iPhone cross-compilation
target. For instance, with the SDK setup I have, I would simply add the
following to the file ~/.jhc/targets.ini:


[iphone]
cc=arm-apple-darwin
cflags+=-I/usr/local/arm-apple-darwin/include
merge=le32 


then you can compile iPhone binaries with:

; jhc --cross -miphone HelloWorld.hs
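
For completeness, the HelloWorld.hs being compiled can be the obvious
program; a sketch (mine, not part of the announcement):

  module Main where

  main :: IO ()
  main = putStrLn "Hello, iPhone!"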

the targets mechanism is extensible at run-time and I have included
native unix, win32, osx-intel and osx-powerpc targets. But certainly
many more interesting ones are possible. Some I have tested have been a
Nokia N770 as a target and an Atheros MIPS-based router running dd-wrt.

Maximum coolness!  When you were targeting the Nokia, how did you handle
the radically different user interface?  Did you have to establish a
mapping from one of the Haskell UI packages to the Nokia equivalents?
If so, which one did you pick, and how much time did it take?


I'd love it if I could get this working for the Palm.

Dave Barton
(soon to be) University of Toronto
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Cabal: error on configure

2008-10-07 Thread David Barton
OK, I suspect this is a real newbie error, but please have mercy.  I have 
downloaded and installed cabal (at least it responds to the --help command 
from the command line).  Yet when I do, say (to give a real example):


cabal configure parameterized-data

(having done the fetch) I get this error:

cabal.exe: Using 'build-type: Custom' but there is no Setup.hs or Setup.lhs
script.


When I download the package manually and look, there is a perfectly 
serviceable Setup.hs script, which I call manually.
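
For context, such a Setup.hs is usually just the stock two-liner; a sketch
of what the package most likely ships:

  import Distribution.Simple
  main = defaultMain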


So what gives?

Dave Barton 


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Do you trust Wikipedia?

2007-10-18 Thread David Barton
The trustworthy articles on Wikipedia have references that can be checked 
and read.  The ones without references are not to be trusted.


Dave Barton
- Original Message - 
From: Philippa Cowderoy [EMAIL PROTECTED]

To: PR Stanley [EMAIL PROTECTED]
Cc: haskell-cafe@haskell.org
Sent: Thursday, October 18, 2007 10:28 AM
Subject: Re: [Haskell-cafe] Do you trust Wikipedia?



On Thu, 18 Oct 2007, PR Stanley wrote:


Hi
Do you trust mathematical materials on Wikipedia?
Paul



To a first approximation - trust but verify.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell] Modelling languages for FP (like UML for OO)

2006-01-19 Thread David Barton


Philippa Cowderoy and Mads Lindstrøm wrote:
- Original Message - 
From: Philippa Cowderoy [EMAIL PROTECTED]

To: Mads Lindstrøm [EMAIL PROTECTED]
Cc: haskell@haskell.org
Sent: Thursday, January 19, 2006 8:16 AM
Subject: Re: [Haskell] Modelling languages for FP (like UML for OO)

On Thu, 19 Jan 2006, Mads Lindstrøm wrote:


Hi all

In object-oriented programming, UML is used to model programs. In
functional programming (especially Haskell) we use ???



Haskell :-)


I am mainly interested in the macro level. That is, modules, classes,
class instances, ...  Not in modelling the internals of a function.



I tend to just write the equivalent Haskell code with the function
definitions left out. I suspect the only thing that can really be gained
with a more graphical language is drawing dependency lines between
modules and the like - though I was never big on the formalised modelling
languages in the first place, so I may not be the best person to conclude
that.
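
Concretely, the style Philippa describes looks something like the skeleton
below (module and names invented for illustration):

  module Inventory (Item(..), restock) where

  data Item = Item { name :: String, price :: Double }

  restock :: Item -> Int -> IO ()
  restock = undefined  -- definition deliberately left out of the model
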
---
I do not wish to take away anything from Philippa's reply, but I do wish to
toot my own project.  Dr. Perry Alexander and I are in the final stages
of moving a Systems Level Design Language (SLDL) to standardization.  It is
called Rosetta, and is suited to the kind of modeling you are referring to
(among other things).  It is logic-based rather than lambda-calculus-based,
but the execution subset that is coming to be a kind of de facto standard
(as opposed to the standard itself) is pretty much Haskell --- the CADSTONE
product is called R-Haskell, as it translates that subset directly into
Haskell.

More information may be found at http://www.sldl.org

And, by the way, we at EDAptive Computing are looking for someone to work on
our own Rosetta projects, particularly with direct experience with theorem
provers and model checkers --- using as much as designing and implementing.
If anyone is interested, or has any questions about Rosetta, by all means
drop me a line.

Dave Barton
EDAptive Computing 



___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell-cafe] Functions with side-effects?

2005-12-21 Thread David Barton

Wolfgang Jeltsch writes:
- Original Message - 


Am Mittwoch, 21. Dezember 2005 13:15 schrieb Creighton Hogg:

[...]



Monads, I believe, can be just thought of as containers for state.


I would say that you are talking especially about the I/O monad here.  A
monad as such is a rather general concept, like a group is in algebra.


While this is correct, I'm afraid that for most of us it is a flavorless 
answer.  I wish I had the mathematical mind that makes the word "group" in 
this context instantly and intuitively recognizable, but I don't.


I think Phil Wadler said it best when he said that a monad is a 
*computation*.  If a function is a mapping between input and output values, 
a computation is both an invocation of a function and the provision of 
values --- which can include state, ordering, and many other things.  Of 
course, I'm a Phil Wadler fan anyway.


The important point of the integration of imperative programming in
Haskell is not that it's done using monads.  The point is that you have a
specific type (IO) whose values are descriptions of I/O actions, and some
primitive operations on IO values.  The IO type together with two of these
primitive operations forms a monad, but this is secondary in my opinion.


Yes and no.  It is important for me, at least, to continue to grasp that IO 
is just not a functional thing --- it is not captured intuitively in a 
function.  Rather, it is a computation --- IO doesn't make sense until it 
executes in an environment which it can affect.  This is why we capture IO 
(as well as other computational concepts) in monads, and why (again IMHO) 
monadic IO is so much more effective and intuitive than continuation-style 
IO or stream-based IO ever was.
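
A tiny sketch of that reading of IO (mine, not Wolfgang's): actions are
ordinary values you can name and combine long before anything runs.

  greet :: IO ()
  greet = putStrLn "hello"

  twice :: IO () -> IO ()
  twice act = act >> act    -- combines two descriptions into one

  main :: IO ()
  main = twice greet        -- only the value bound to main is ever run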


Dave Barton


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Updating the Haskell Standard

2005-07-20 Thread David Barton

John Goerzen writes:


There was a brief discussion on #haskell today about the Haskell
standard.  I'd like to get opinions from more people, and ask if there
is any effort being done in this direction presently.


snip


I know that some people would like to hold off on such a process until
their favorite feature (we'll call it feature X) is finished.  I would
argue that incremental addenda to the standard should be made more
frequently, so that new features can be standardized more easily.

Thoughts?


I can contribute some experience from commercial standardization efforts. 
ANSI, IEEE, and ISO standards require re-balloting every five years; 
otherwise the standards lapse.  Re-balloting may or may not be accompanied 
by changes in the standard; for a standard as complex as a language, new 
versions at least every five years seem to be fairly common with newer 
standards (ANSI C has not changed in newer standardization ballots as far as 
I know).


The trade-off for standards is between stability (for tool developers and 
learners) and stagnation.  If the standard changes too often, there will be 
only one developer (the one effectively in charge of the standard) and it 
will tend not to be taught anywhere (because what students learn is obsolete 
too quickly).  If the standard is unchanged too long, it becomes irrelevant 
and obsolete and no one pays attention to it.  Five years is what the 
general industry seems to have settled on as a good average, but it may or 
may not apply here; the circumstances are different.  Developers of Haskell 
are pretty much volunteers and academics; that changes things.  On the other 
hand, it is a rapidly developing field.


How all this shakes out is something for the community at large to decide; 
however, that is what happens in other standards bodies.


Dave Barton
EDAptive Computing


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: [Haskell] Newbie : How come that cyclic recursive lists are efficient?

2005-01-25 Thread David Barton
Benjamin Franksen writes:

 There *is no* difference between the two if one views them as pure
 mathematical values. Questions of run time speed or memory usage, i.e.
 efficiency (which your original question was about) are clearly outside
 the realm of pure values, and thus we may perceive them as distinct in
 this wider setting.

 My favourite analogy for this is the old joke about a topologist being a
 person who cannot see any difference between a cup and a doughnut.

The engineer's response, of course, at the thought of ignoring questions
about run time speed and memory usage, is that a topologist is a person who
doesn't know his ass from a hole in the ground.

(I was told this quote was actually from abstract algebraists, when
confronted by the famous description of a topologist, but what the heck.)

Dave Barton
EDAptive Computing


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] The implementation of functional languages

2004-09-21 Thread David Barton
John Meacham writes:
  I am looking for the book The implementation of Functional
  Programming languages by S. L. Peyton Jones.

  This book is out of print and currently there is no electronic version
  of it. The Haskell bookstore folk are working on reconstructing it and
  making it available for print-on-demand,
  http://www.cafepress.com/haskell_books/, but it's not clear when
  exactly it will be available.
 
  Your other option is to try to find a used copy, but they are pretty
  expensive.

 I am working on getting that book available in the haskell bookstore. I
 searched quite a while before I found a used printed copy at a
 reasonable price and my search was part of my motivation for creating
 the bookstore.

 It is a bit trickier than the other books on the site because I only
 have a scanned in copy of the print version to work with, rather than
 LaTeX source. but I should have time this week to get it online.
 John

My wife (mainly) and I, with Simon's permission, have been working on
getting a web-enabled version of this available for quite some time.  It
hovers on the brink of completion, and should be there Real Soon Now as
well.  This will include a web-enabled table of contents and next and back
buttons.

If I'd known how much time she would put in, I'd have never asked her for a
small favor...

Dave Barton
EDAptive Computing


___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Graphical Programming Environments (was: ANNOUNCE: Release of Vital, an interactive visual programming environment for Haskell)

2003-11-13 Thread David Barton
I love religious wars.

Having been around awhile, I make a prediction.  This will thrash a while,
those who like graphical environments will make their points, those who like
textual environments will make their points, no one will convince anyone
else, and eventually it will die down.

In fact (in my opinion), people operate differently.  Some operate better
graphically, some operate better textually, and I'm glad both tools are
available.  Me, I'm a text person, but I know folks who think better in
pictures, bless 'em.

Let the games begin.

Dave Barton
EDAptive Computing



___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Typing units correctly

2001-02-13 Thread David Barton

Tom Pledger writes:

   In both of those cases, the apparent non-integer dimension is
   accompanied by a particular unit (km, V).  So, could they equally
   well be handled by stripping away the units and exponentiating a
   dimensionless number?  For example:

   (x / 1V) ^ y


I think not.  The "Dimension Types" paper really is excellent, and
makes the distinction between the necessity of exponents on the
dimensions and the exponents on the numbers very clear; I commend it
to everyone in this discussion.  The two things (a number of "square
root volts" and a "number of volts to an exponent") are different
things, unless you are simply trying to represent a ground number as
an expression!
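
To make the distinction concrete, here is a little value-level sketch of
mine (not from the paper): the dimension's exponent lives next to the
number, and taking a square root halves that exponent rather than
exponentiating a dimensionless value.

  -- one base dimension (volts); its exponent is a Rational so that
  -- sqrtQ can produce "square root volts" (exponent 1/2)
  data Quantity = Q { magnitude :: Double, voltExp :: Rational }
    deriving Show

  mulQ :: Quantity -> Quantity -> Quantity
  mulQ (Q x m) (Q y n) = Q (x * y) (m + n)   -- dimension exponents add

  sqrtQ :: Quantity -> Quantity
  sqrtQ (Q x m) = Q (sqrt x) (m / 2)         -- volts^(1/2), not (x/1V)^y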

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.averstar.com/~dlb

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



Re: Dear Santa (Error Messages)

1999-09-15 Thread David Barton

Simon Marlow writes:

   That should be http://www.cs.uu.nl/groups/ST/Software/Parse, I
   think.

Hey, I just grabbed the link reference from his file :-).

   "blazingly fast" isn't very useful.  Show me the NUMBERS :-)

Well, my Rosetta grammar wouldn't be very useful.  Grab them yourself,
or refer to his paper, which has some good graphs.


Dave Barton *
[EMAIL PROTECTED] )0(
http://www.averstar.com/~dlb



Re: Dear Santa (Error Messages)

1999-09-14 Thread David Barton

George Russell writes:

   Parser combinators don't actually seem to analyse the grammar at
   compile time at all, and instead just try all possibilities.  This
   looks like stone-age technology to me.  The first version of MLj
   was written with parser combinators.  As a result the parsing was
   much much slower, even after various exponential blow-ups had been
   painfully tracked down and removed.  Error correction was hopeless.
   And worst of all, there were a number of lurking ambiguities in the
   grammar which weren't discovered until it was exposed to the rigour
   of LALR parsing.

You simply haven't looked at the latest version of Doaitse Swierstra's
LL(k) parsing combinators.  They are blazingly fast (as far as I can
tell), do good error correction (sorry, for those folks that don't
like it --- it can be disabled), and seem to detect ambiguities
nicely.

Check out http://www.cs.uu.nl/groups/ST/Parse for more information.

I've been using them, and they really are neat.

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.averstar.com/~dlb



Re: April fools joke

1999-05-19 Thread David Barton

Has anyone written the poor guy, perchance to offer him a small clue?

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.averstar.com/~dlb





Re: Haskell: the Ultimate Algebraist?

1999-05-07 Thread David Barton

Jerzy Karczmarczuk writes:

   I am afraid that Sergey is dreaming of transforming Haskell into a
   universal Computer Algebra system. We know for more than 30 years
   that the general problem of algebraic simplification is a mess. OK,
   some polynomial transformations can be done, but I suspect that
   enhancing a parser with such rule recognition system containing
   obviously the non-linear pattern matching (if not a full
   unifier...) seems a deadly idea.

Boy, do I agree with this.

What began with a fairly limited, and practical, suggestion on Simon's
part to assist the compiler with optimizations and transformations
that are valid in some cases and not in others has blossomed into a
search for a full logical language, with inference, proof checking,
and all the rest.

Look, if you want a logical language, go for it.  Frankly, I am in the
throes of Language Puppy Love(tm) with Maude right this second (those
who are interested, check out http://maude.csl.sri.com).  Neat stuff.
But that doesn't mean I want to twist Haskell to fit that frame.  Nor
does it mean that I want to abandon Haskell and do all my graphics
interface programming and scripting via term rewriting logic.  Have no
fear, Haskell, you are still my first love!

Please, let's not try to twist Haskell into something it's not, and
was not designed to be.  I'm still thinking about Simon's proposal,
but at the very least we should limit it to those places where it is
practically advantageous, rather than attempting to extend it beyond
the point where it is, clearly and straightforwardly, to our direct
(implementational!) benefit.

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.averstar.com/~dlb





Re: Couldn't find isAlphaNum

1999-02-15 Thread David Barton

Sigbjorn writes:

   Weird - are you sure it was capitalised as Haskell now prescribes?

Well, *that* makes me feel dumb.  That was the problem, indeed.  I ask
pardon for my blindness.

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.averstar.com/~dlb



Couldn't find isAlphaNum

1999-02-13 Thread David Barton

While running the new 4.02 on a Linux box, I got a "could not find
isAlphaNum" error.  I looked at the .hi file, and it seemed OK;
however, switching to the definition of the expressions using
"isAlpha" and "isDigit" solved the problem.  Don't know what's wrong,
but
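
The workaround, spelled out (the import is Data.Char in today's hierarchy;
it was plain Char in the 4.02-era libraries):

  import Data.Char (isAlpha, isDigit)

  isAlphaNum' :: Char -> Bool
  isAlphaNum' c = isAlpha c || isDigit c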

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.averstar.com/~dlb



Re: syntactic sugar for arrows

1999-01-28 Thread David Barton

Michael Hobbs writes:

   Has anyone else read this paper? I'm interested in hearing
   comments, if only to point out some things that I may have
   missed. I'll admit, I haven't read the entire paper. I gave up
   after the 16th page, because it was so conceptually unwieldy. It's
   not that I had difficulty understanding how the system works, it's
   just that I found it difficult to believe that such a complex
   system would be useful in general practice. (Also, I'm not a
   mathematician who does a significant amount of work in category
   theory, so that may contribute to its apparent awkwardness to me.)

Hmmm.  *If* you believe that monads are a useful construct, and
understand them to an extent, then it is not clear to me why you would
have that much difficulty here.  (Sorry, that came out as very
negative --- I just mean that the concepts are so similar that an
understanding of one would seem to me to carry over fairly easily to
an understanding of the other.  I meant no disparagement.)

Part of this may be experience.  I have been working with Swierstra
and Duponcheel's parsing combinators, including modifying and adding
to them, for a few weeks now.  For one thing, they really are
noticeably faster for grammars that are not too complex --- those
things really zip along, and they are not yet even optimized for table
lookup (something I mean to do soon).  Therefore, I have no doubt that
being able to separate out the static and dynamic information, and
operate on them separately, is a useful thing; I have already
encountered the utility myself.
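
For those who haven't seen the paper, the shape of the idea in miniature
(my simplification, not the real library's types): a parser carries static
information, computable without consuming any input, alongside the dynamic
parsing function itself.

  -- static part: facts known before parsing starts
  data StaticParser s = SP { acceptsEmpty :: Bool, firstSet :: [s] }

  -- dynamic part: the parsing function proper
  newtype DynParser s a = DP ([s] -> Maybe (a, [s]))

  data Parser s a = P (StaticParser s) (DynParser s a)

  symbol :: Eq s => s -> Parser s s
  symbol c = P (SP False [c]) (DP run)
    where run (x:xs) | x == c = Just (x, xs)
          run _               = Nothing

Combinators can then consult the static part (say, use firstSet to choose
an alternative) without ever running the dynamic parsers they reject, which
is where much of the speed comes from.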

Don't worry too much about the math.  In fact, I think John did well
there, emphasizing the utility of the arrows and not bringing up the
necessary laws until the final sections of the paper.  If you have not
looked at the Swierstra and Duponcheel paper, it really might help
explain why all this is necessary (although, believe me, it is not an
easy paper either --- I had a much harder time with it than I did with
John's).  Doaitse Swierstra has made the paper, and a working copy of
the combinators, available for download at:

http://www.cs.ruu.nl/groups/ST/Software/Parse/

While you're in the area, you might take a peek at their attribute
grammar system.  It's another example of where separating out static
and dynamic calculations might be of benefit.

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.averstar.com/~dlb





Re: Implementation of list concat and subclass condition

1999-01-22 Thread David Barton

Peter Møller Neergaard writes:

   1) The implementation of list concatenation ++.  In the Haskell
  report it is stated that ++ in general is an operator on monads.
  In the case of lists, ++ works as list concatenation.  However,
  I cannot find any place describing whether the implementation of
  ++ is supposed to be better than the straight-forward:

[] ++ ys = ys 
(x:xs) ++ ys = x : (xs ++ ys)


See Okasaki, "Purely Functional Data Structures", for a discussion of
catenable lists and queues.  But keep in mind that you may not need it;
while the running time of this algorithm is O(n), in most contexts you
will be accessing the resulting list from the head anyway.  This means
that the cost of the concatenation can be amortized over the list
access, leading to amortized O(1) running time.  Even if you don't
access the entire list, lazy evaluation means the unaccessed part of
the list won't be evaluated.  Again, Okasaki gives a good summary of
amortized cost in the presence of lazy evaluation, and of methods of
proving amortized cost bounds.
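
A tiny demonstration of that last point (my example, not Okasaki's):
consuming only a prefix of xs ++ ys only ever builds that prefix.

  main :: IO ()
  main = print (take 3 ([1 .. 1000000] ++ [0]))
  -- prints [1,2,3]; only three cells of the appended list are built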

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.averstar.com/~dlb






Re: .hi error (probably)

1999-01-05 Thread David Barton

Nah, not as long as I know about it.  Thanks a million; I'll change
things as necessary.

Perhaps a line in the user's manual might help --- it wasn't clear to
me from reading it that modules that inherit non-standard modules must
also use the appropriate flags.

Come to think of it, this is the kind of thing I should at least try
before I bother the bugs list, darn it.  Live and learn --- I'll try
to investigate better next time.

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.averstar.com/~dlb



.hi error (probably)

1999-01-04 Thread David Barton

Got a strange one.  I am compiling two files using GHC, one of which
depends on the other.  The first compiles just fine, but the second
compile gives an error on the *hi* file of the first.  Specifically:

  [dlb@hudson temp]$ ghc -c -fallow-undecidable-instances -fglasgow-exts 
-fallow-overlapping-instances Structures.hs
NOTE: Simplifier still going after 4 iterations; bailing out.
ghc: module version changed to 1; reason: no old .hi file
[dlb@hudson temp]$ ghc -c SymbolTable.hs
 
Structures.hi:47:
Too many parameters for class `SortedList'
In the class declaration for `SortedList'


Compilation had errors

This is part of a much larger system; however, it seems to be
contained to these two files (as above).  Note the experimental
flags.  I am using ghc 4.01, patch level 0, on a Linux system running
RedHat 5.0.

I would be happy to tar up the files, run it with -v, or whatever if
this is not a known bug.  This includes both the source and the .hi
files, if that helps.

Happy new year, everyone.

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.averstar.com/~dlb



Re: Interesting class and instance question

1998-12-08 Thread David Barton

Simon, thanks a lot.  You write:

   I don't think you can avoid this. You have two type constructors:

   class SortedList s a where ...
   class FiniteMap m k a where ...

   s has kind * -> *
   m has kind * -> * -> *

   You want to say

   instance SortedList s (Pair k a) => FiniteMap m k a where...

   but there's a relationship between s and m, namely

   m k a = s (Pair k a)

   That is the relationship your MkFinMap states.  

   I don't know how to improve on this... except to give sortedList a 
   key and a value.

I was afraid of that.  One of the things I tried was eliminating one
of the type constructors on FiniteMap, but I found the types getting
hopelessly confused (i.e., ambiguous).  I think my intuition, or my
hope for a "cleaner" solution, is just plain wrong.
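
For the record, the relationship Simon states could be written out roughly
as follows (the actual MkFinMap declaration never appears in this thread,
so this is only a guess at its shape):

  newtype Pair k a = Pair (k, a)

  -- a finite map represented as a sorted container of key/value pairs
  newtype MkFinMap s k a = MkFinMap (s (Pair k a))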

   You don't need class ZeroVal.. just use 'undefined' instead.

Thanks --- I had forgotten that.

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.averstar.com/~dlb





Re: Fixing imports for and namespaces (was: Simon's H98 Notes)

1998-10-26 Thread David Barton

One more quick comment, and then I think I (at least) am done (to the
extent that the difference in opinion is clearly defined).

Fergus Henderson writes:

And, again IMHO, it is the task of the language to *define* the
encapsulation (or to allow that encapsulation to be defined), and
the job of the operating system or programming environment to
enforce it (or to trust to convention, depending).

   There's not much difference between "language implementation" and
   "programming environment", is there?

No; however, there is a world of difference between "language
implementation" and "language definition".  The two are *very*
distinct in my mind.  Note that I said (above) the job of the language
is to define; you morphed that into "language implementation".

   Above you say it is the job of the OS or programming environment to
   enforce encapsulation.  I think it should be the language
   implementation's job, but the OS should be considered a part of the
   language implementation, so letting the OS handle it would be one
   way for the language implementation to do the enforcement.

I am happy to make it part of the language implementation as long as
it does not impinge on the language definition, leaving other
implementations free to do it other ways.  Pragmas may be one way to
do this.  I simply object (and will continue to object) to making one
mechanism of encapsulation enforcement part of the definition,
imposing it on all implementors.  To repeat: defining the
encapsulation is the job of the language definition, but enforcing it
is not (and should not be).  All IMHO, of course.

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.averstar.com/~dlb





Re: Fixing imports for and namespaces (was: Simon's H98 Notes)

1998-10-24 Thread David Barton

Fergus Henderson writes:

   No, different uids don't work fine in the multiprogrammer case.
   The programmer that is compiling the source code needs read access
   to all of it (for using tools like `grep', if nothing else).  Once
   he has that read access, nothing prevents him from violating the
   encapsulation.

Let's see here --- you want him to be able to do reads like grep but
not refer to the units directly?  You ask too much of *any* mechanism
(unless you restrict operations like grep and others, doing special
versions, in which case they can do the appropriate superuser stuff).
Either you can trust the programmer to follow conventions, or you
restrict any kind of read access; without that restriction, he or she
can always make a local copy and do what he or she wills.

But even *if* you decide that some more sophisticated mechanism of
version control is necessary, it is properly an operating system
function, *not* a language function.  If you decide that a "grep" is
OK, but compile access is not, there is no reason whatever to decide
that compilation is your only "forbidden" function.  It makes a lot
more sense to add an operating system function --- say, granting
access by tool rather than by specific user or group ID --- that gets
you the finer grained access control you need.  Then you only
implement it once.  Under your suggestion, it not only gets
implemented by multiple tools, but it gets codified in the entry
format (whether a computer language or a document control system) of
multiple tools, each in a different way.  Chaos!

It all boils down to: access control to files is the responsibility of
the operating system, *not* a programming language.  The most a
language should do is make responsible conventions possible and
expressible, and Haskell can do that with a very limited extension.

2) The compiler can (and should) be intelligent enough to detect
   the importation of symbols only through non-functional
   (gather) modules.  This is not a big test to perform.

   So the compiler should enforce this convention?

No; detect it and take advantage of it.

Most of the below is variations on a theme, which makes lots of sense
if you accept the basic assumption that enforcing access control
conventions is properly a function of the language; under that
assumption, I agree with most of it.  So:

snip

   I don't see how that helps.  Every programmer on the team may need
   to modify any file (in order to fix a bug, for example).  So all
   programmers need write access to every file.

This does not correspond to any configuration controlled project that
I have ever worked on, so I am at sea.  In my world, if you need a
file that is outside your own package, you need to go to the
configuration manager (who may or may not be the project manager) for
permission to check out the file to make the change.  This may be as
simple as an Email, or it may even require review.  Allowing the
programmer to just do it, on his own hook, is well outside any kind of
configuration control guidelines I have ever worked under.

This may be a commercial / university kind of thing.  I know of no
company that would work with these kinds of guidelines, which may
account for our different approach to this whole question.

   Well, normally I check out all the modules in the system.  With
   CVS, I type out `cvs checkout mercury' and it checks out all the
   modules in the entire system, giving me both read and write access
   to all of them.  Then I type `make' and it compiles everything.
   This is nice and simple, and it works.

We work in very different environments!

   Yes, you may be right.  But I don't think a language should require
   the use of a particular configuration management style.  And
   therefore I'm not keen on any solution which relies on the
   configuration management style rather than the language to enforce
   encapsulation.

And, again IMHO, it is the task of the language to *define* the
encapsulation (or to allow that encapsulation to be defined), and the
job of the operating system or programming environment to enforce it
(or to trust to convention, depending).  And still IMHO, Haskell can
do this definition with very little change.

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.averstar.com/~dlb





Re: Fixing imports for and namespaces (was: Simon's H98 Notes)

1998-10-20 Thread David Barton

S. Alexander Jacobson writes:

   I am not sure what you mean by adding a library name.  My objection to the
   current model is that, for example with hugs, each new library requires
   you to add a path to your hugs path and hope that two different libraries
   both don't share the same module name.

   If module names were nested then you could reduce the likelihood of
   namespace collision.  Java's requirement of putting the library
   names inside the text of modules is annoying.  Use of the directory
   hierarchy strikes me as simpler.

I beg your pardon; I was being unspecific.  I meant adding a library name
to the syntax of names, thus allowing a triple of (library, module, name)
for resolving the ambiguity of the name space.  Thus, modules
can have the same name where necessary.

   It may be a good use for them, but you are describing a way to implement
   the functionality I am describing manually.  I guess it depends whether
   you typically use module organization within a package to support
   information hiding or just code organization and literate comments.
   Most of the time, if I think information hiding is very important, I want
   a different package.  Java's package system has this orientation as well.
   All classes in the same java package can access each other.

I am describing precisely that; however, I am also saying it is a good
thing, better than having it done manually.  This puts the onus on the
user to control his name space for his own package, which I maintain
is a Good Practice.  It also places this information in a specific
place, where the user can go to find out the definition of specific
names should it be necessary.

All this has little to do with whether information hiding is important
or not.  It is simply a discussion of how it is to be implemented.
Information-hiding needs differ within a package from those of users of
the facility as a whole.  A single module serving as a "visibility
gather point" does this just fine.

   The re-export facility is nice, but I think it is more useful for
   establishing what functions are visible to other packages then defining
   what is visible within a package.

But export is where information hiding is (or should be) defined;
otherwise it is not information hiding per se, but name space control
(the information is not hidden).  My contention is that export is
precisely where information hiding should take place, and that the
current facility provides this well.

   I am not arguing for a lot of mechanism here, just an arbitrarily deep
   directory tree in which to store modules (like Java, Ada, or Perl)

And I am arguing that arbitrarily deep is not needed, and introduces
more complexity than is worth while.  It's not a major point in my
personal system of programming beliefs, but other languages (VHDL
really does come to mind) do just fine with a triple as above.

Sure; however, this can be an artifact.  Binding library names to
directories with user variables works just fine.

   I am still not sure what you mean here.

A globally consistent name space can be produced by binding directory
names to library names known inside the Haskell modules via user
variables.  The triple is all that is needed for global consistency.

   If all import was unqualified, I'd agree with you.  But there are times
   when it is just much more convenient to import the entirety of what is
   exported from another package, especially if you can import
   qualified.

Then I would go with a separate gather module, to make specific what
you are doing.  It isn't much overhead to type.  The mechanism you
describe makes it too easy to establish a multi-headed import hydra
without thinking about it.

   True.  My proposal supports multiple gather modules.
   (One might argue that a package should have only one public gather module,
   but that strikes me as overly restrictive)

I'll accept multiple public gather modules.

   No.  Right now there is just a flat module namespace.  So there are no
   packages to protect.  I am arguing for a distinction between keeping
   things hidden from other packages and keeping things hidden from a
   module's own package.  Without a packaging system, this distinction is
   not meaningful.

Hmmm.  I thought that public gather modules did just this.  I have the
sneaking suspicion that I am missing something fundamental in your
proposal.  My cranial density is sometimes legendary.

   I am saying that these existing export rules should stay and apply to
   packages.

Last comment applies; either public import-export (gather) modules
provide this, or I am missing something pretty basic here.

   Modules that share the same package have more access to each other's
   contents than they do to modules of other packages.  When a module imports
   its own package in an unqualified manner ("package-" in my syntax), it
   gains access to all functions that are not declared private in all modules
   in the package.  

   

Re: Fixing imports for and namespaces (was: Simon's H98 Notes)

1998-10-19 Thread David Barton

S. Alexander Jacobson writes:

   And, as long as we are supporting more Ada-like import declarations,
   I would like to renew my campaign for more Ada- or Java-like
   module namespace (packages) (yes I know, this won't be in H98)
   The existing module system is:

   * too shallow for larger scale application development (because it is too
 easy for different developers to create modules with the same
 name).

True, but easily fixable with the addition of a single library name.
I have never found a reason to go much further than that, as long as
the library name can be bound appropriately.

   * a pain when you write a package with multiple modules and each module
 has to import all the other modules in the package (except itself)

Disagree here; Haskell modules can re-export, meaning that it is easy
to create "gather nodes" that combine a bunch of modules into a
single, self-consistent, externally visible name space.  A good use
for them.
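
Concretely, a gather node is nothing more than a module of re-exports;
a sketch with invented module names:

  module MyLib
      ( module MyLib.Parser
      , module MyLib.Printer
      ) where

  import MyLib.Parser
  import MyLib.Printer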

   Here are the name space semantics Haskell should provide:
   1. A package is a set of modules in the same directory and 
  a cluster is a set of clusters, packages, and modules in the same
  directory

We don't need all that mechanism.  See the VHDL library / package
mechanism, for example.

   2. Packages and clusters should have a globally consistent name
  space

Sure; however, this can be an artifact.  Binding library names to
directories with user variables works just fine.

   3. Modules may import either other packages (all public declarations of all 
  modules in the package) or individual modules in other packages

*shudder*.  I disagree here; it is too easy for a user to change a
single name, produce name space pollution, and have no idea where it
came from.  Much better to impose discipline of a single "gather
module"; i.e., one that explicitly imports and reexports the
appropriate names.

This one has bitten me before.  Really.

   4  Modules may (on a block level) import qualified or unqualified

OK, I guess.

   5. Modules should have access to all functions in a package
  without knowing the module in which the function was declared

Given by the gather modules above.

   6. Modules should be able to hide specific declarations from their
   own package

Explicit export?  Don't we have that?

   7. Module level exports should define visibility to other packages

I am not quite sure what you mean by this.

   I suggest the following (strawman) implementation syntax:
   1. Module/package names should have a java-like import syntax e.g.
  com-numeric_quest-fun_pdf (for Jan's haskell/pdf library) and map to
  a directory structure (off of the source/lib path) of
  com/numeric-quest/fun-pdf

As much as I dislike getting into syntax wars, I've never found the
Java syntax here particularly salubrious.  It is too busy, and tends
to make the task of finding like names by scanning the text
difficult.

   2. clusters and packages as functions
   Generalize import.  Treat clusters, packages, and modules as functions
   which take a name and return a cluster, package, module, or function.
   Since we don't want all names in all paths to pollute module name space, I
   suggest using the "-" operator.  e.g. 

myFun = \x -> com-numeric_quest-fun_pdf-somePDFFunction x

   Use of - in function definitions outside or on the right of lambda
   expressions tells the compiler/interpreter to resolve an import.

   You run into a slight problem with - polluting the type namespace,
   but clusters and package names are functions and are therefore lower case.

myFun::Int -> (com-numeric_quest-fun_pdf-PDFDocument)

   (If you want, you can interpret the type of myFun as taking an Int and
   the fun_pdf package and returning a PDFDocument i.e. linking to
   fun-pdf may happen at runtime)

   Import renames work like function names:
funpdf = com-numeric_quest-fun_pdf

   If you want to combine the current namespace with the namespace of another
   module or package, instead of the keyword "use", you do:

com-numeric_quest-
 myFun = \x -> fun_pdf-somePDFFunction x -- fun_pdf is inside numeric_quest
 myVal = pdfConstant
 fun_pdf- -- downscope
  myFun2 = \x -> anotherPDFFunction x -- anotherPDFFunction is in fun_pdf module

   This should feel vaguely like using lambda expressions.

Is this basically an implementation suggestion?  I'll leave it to
Simon and others to comment.

   3. Accessing the current package with "package"
   To import the contents of the current package use,
package-
 myVal = somePackageFunction 3

   "package" is a special keyword to denote the current package.

Also applicable to the current library.

   4. To hide specific declarations from the surrounding package,
  declare them Private e.g.
  Private myConst=2 (or would a private section be better?)

Controls on name export control are fine here, and the present module
system 

Re: Haskell in Scientific Computing?

1998-10-16 Thread David Barton

Simon Peyton-Jones writes:

 Another approach is to compete not head-to-head on speed, but on
 cunning.  Get a good library of numeric procedures (e.g. Matlab),
 interface them to Haskell, and use Haskell as the glue code to make
 it really fast to write complex numerical algorithms.  99% of the
 time will still be spent in the library, so the speed of the Haskell
 implementation is not very important.  This looks like a jolly
 productive line to me.

I don't know if it is better to go with a commercial product here
(like Matlab) or one of the semi-public-domain (Reduce) or wholly
public-domain tools.  It would be a shame if Haskell were
publicly available but the thing that made it useful for scientific
computing was not.

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.intermetrics.com/~dlb





Re: One more bug (I hope)

1998-09-02 Thread David Barton

Done; that did it.

Many, *many* thanks.

Once more, guided through the maze

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.intermetrics.com/~dlb



Re: One more bug (I hope)

1998-09-01 Thread David Barton

Will do.  I assume this does not require a recompile, just a reinstall?

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.intermetrics.com/~dlb



Re: One more bug (I hope)

1998-09-01 Thread David Barton

OK, done.  I have applied the patch, run gmake all, then gmake
install.  I now get the following error from the final (ld) step:

  /usr/src/ghc/lib/lib/libHSrts.a(Printer.o): In function `DEBUG_LoadSymbols':
  /usr/src/ghc/fptools/ghc/rts/Printer.c:623: undefined reference to `bfd_init'
  /usr/src/ghc/fptools/ghc/rts/Printer.c:624: undefined reference to `bfd_openr'
  /usr/src/ghc/fptools/ghc/rts/Printer.c:628: undefined reference to 
`bfd_check_format_matches'
  make: *** [main] Error 1

Still working on things.

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.intermetrics.com/~dlb



One more bug (I hope)

1998-08-31 Thread David Barton

OK, I have compiled and installed ghc-4.00 on Linux Redhat 5.0.  When
linking, including -syslib posix, I got the following ld error:

ld: cannot open -lnot-installed: No such file or directory

Obviously, not-installed is not mentioned in any of my command lines.

Any assistance would be appreciated.

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.intermetrics.com/~dlb



Segmentation Fault

1998-08-19 Thread David Barton

Working on a Linux Redhat 5.0 machine, running version 3.02 patchlevel
0, I have run across two problems:

1) A program that seems correct (at least, it compiles and runs under
   Hugs) dies with a segmentation fault and dumps core.

2) Somehow, programs cannot find the Posix library.  I am not sure
   why; as I recall, the compilation of the libraries went just fine,
   and the posix directory exists under the lib/imports directory,
   with all the nice little .hi files there.

I would be happy to either attempt some rudimentary debugging under
tutelage, or just tar and zip the whole damn thing up and send it to
any interested party if that is better.

Thanks for any help that I can get with this.

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.intermetrics.com/~dlb



Re: Segmentation Fault

1998-08-19 Thread David Barton

Sigbjorn writes:

   Looks bad, could you tarzip the program up so that we can have a look
   at it?

On the way to you, under separate cover (why burden the list?).

   I don't know what might be causing this, could you provide the output
   of compiling one such module with -v?

Found it (why does it always happen just *after* I call for help) ---
lines out of order in the Makefile.

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.intermetrics.com/~dlb



Re: Press Release

1998-04-01 Thread David Barton

We have been assimilated!!!

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.intermetrics.com/~dlb





Anomaly in IO in GHC (I think)

1998-03-25 Thread David Barton

Consider the following (literate) program:

> module Main where
> import IO

> main:: IO()
> main = hSetBuffering stdin NoBuffering >>
>        interact trns

> trns:: String -> String
> trns [] = []
> trns (c:cs) =
>   let str c = case c of
>         '1' -> "one\n"
>         '2' -> "two\n"
>         '3' -> "three\n"
>         _   -> "other\n"
>   in (str c) ++ (trns cs)

This compiles under both Hugs and GHC appropriately (note that I added
a blank "hSetBuffering" definition to IO.hs for Hugs).  When I run the
program under Hugs and press the keys "1234" on the keyboard I
get the following output:

one
two
three
other

which is just what I expect.  On the other hand, when I try it under
GHC it compiles appropriately and I get the following output:

1one
2two
3three
4other

i.e. the input is somehow echoed to stdout without my trying to do
anything.  Is this a Unix thing?  If so, why didn't it happen under
Hugs?  Is it a GHC thing?  Is it controllable?  If so, how can I stop
it?
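
For what it's worth, with today's System.IO the echo can be switched off
explicitly; a sketch using the modern API (which may well postdate the GHC
discussed here):

  import System.IO

  main :: IO ()
  main = do
    hSetBuffering stdin NoBuffering
    hSetEcho stdin False     -- stop the terminal echoing what is typed
    interact (concatMap str)
    where
      str '1' = "one\n"
      str '2' = "two\n"
      str '3' = "three\n"
      str _   = "other\n"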

Any help gratefully appreciated.

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.intermetrics.com/~dlb



Re: FP Naming/Directory Service

1998-03-25 Thread David Barton

S. Alexander Jacobson writes:

   The difficulty is that we typically develop on Windows and Linux
   and deploy on Linux or Solaris.  The system I am working on
   involves using CGI/servlets to update a directory server and then a
   Java-based production system (Jess, a CLIPS clone) to process the
   directory information.  Javasoft doesn't appear to have implemented
   an ADSI client.

   In general, IDL/CORBA would be preferable to COM because it is
   more cross-platform (specifically it can operate as a client to
   Java servers).

Strongly, *strongly* agree --- and not just because of Java.  I know
that Microsoft has an incredible share of the market; however, using
standard formats rather than proprietary ones is always of benefit if
we are doing things cross-platform (which many of us do).

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.intermetrics.com/~dlb





JOB OFFERING

1998-02-20 Thread David Barton

 JOB OFFERING

Intermetrics, Inc. in Vienna, Virginia is looking for a Masters level
or equivalent software engineer with experience in functional
programming and functional languages, particularly lazy functional
languages.  Experience in and familiarity with Domain Specific
Embedded Languages (DSELs), parsers written in functional languages,
and monads is desired.  Any experience with formal methods, electronic
design and/or test, and type systems is a plus.  We use Haskell here,
but aren't bigoted about it; those with experience in other functional
languages and systems are encouraged to apply.

Please send your resume by Email (postscript, pdf, or ascii preferred)
to [EMAIL PROTECTED], or to the following address:

Intermetrics, Inc.
1595 Jones Branch Drive, Suite 600
Vienna, VA 22182

Lots of interesting work here for the right person.


Dave Barton *
[EMAIL PROTECTED] )0(
http://www.intermetrics.com/~dlb





Thanks

1998-02-12 Thread David Barton

Well, after keeping on going and getting a few more errors of the same
type, which I corrected without further guidance, I have gained two
things:

1) A much greater appreciation for the complexity of configuration
   files and the messiness of Unix compatibility.

2) Gratitude to the Haskell team for some *very* responsive answers.

Thanks a million, guys.  I don't envy you the task of cleaning up the
configuration procedure when the latest gnu library structure arrives
over there.

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.intermetrics.com/~dlb



Yet another problem......

1998-02-11 Thread David Barton

Having added the caddr_t definition to the two files, things chugged
along famously for a bit.  However, down around absCSyn way, another
glitch occurred:

  ghc -DOMIT_NATIVE_CODEGEN -cpp -fglasgow-exts -Rghc-timing -I. -IcodeGen -InativeGen 
-Iparser 
-iutils:basicTypes:types:hsSyn:prelude:rename:typecheck:deSugar:coreSyn:specialise:simplCore:stranal:stgSyn:simplStg:codeGen:absCSyn:main:reader:profiling:parser
 -recomp -c absCSyn/CLabel.lhs -o absCSyn/CLabel.o -osuf o

  absCSyn/CLabel.lhs:315: Value not in scope: `fmtAsmLbl'

  absCSyn/CLabel.lhs:319: Value not in scope: `underscorePrefix'

I included the entire command line on purpose; I wanted to establish
that OMIT_NATIVE_CODEGEN is indeed present.  This is because, in
CLabel.lhs, the following lines occur:

  #if ! OMIT_NATIVE_CODEGEN
  import {-# SOURCE #-} MachMisc ( underscorePrefix, fmtAsmLbl )
  #endif

Thus, when OMIT_NATIVE_CODEGEN is defined, the two values in question
are not defined.  Down farther in CLabel, I find that the two
references in question are *not* enclosed in a similar if statement;
the operative portion is:

  \begin{code}
  -- specialised for PprAsm: saves lots of arg passing in NCG
  #if ! OMIT_NATIVE_CODEGEN
  pprCLabel_asm = pprCLabel
  #endif

  pprCLabel :: CLabel -> SDoc

  pprCLabel (AsmTempLabel u)
    = text (fmtAsmLbl (showUnique u))

  pprCLabel lbl
    = getPprStyle $ \ sty ->
      if asmStyle sty && underscorePrefix then
         pp_cSEP <> pprCLbl lbl
      else
         pprCLbl lbl

Therefore, the error.  Since it looks like a real, honest-to-goodness
error I am not sure what to do about it.  Any suggestions, anyone?

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.intermetrics.com/~dlb



Re: The latest in a continuing saga......

1998-02-11 Thread David Barton

Well, let me try to reply to both of these at once, just to keep
everyone up to date.  Sven Panne writes:

   Looks like an old friend of mine (problems on HP-UX some releases ago).
   The problem is fptools/ghc/lib/cbits/timezone.h. It tries to be
   clever about handling time info on different Unices. What does
   "grep zone fptools/config.cache" yield? Here the output from our
   Linux boxes:

  ac_cv_altzone=${ac_cv_altzone=no}
  ac_cv_struct_tm_zone=${ac_cv_struct_tm_zone=no}
  ac_cv_type_timezone=${ac_cv_type_timezone=time_t}

Well, mine reads as follows:

  ac_cv_altzone=${ac_cv_altzone='no'}
  ac_cv_struct_tm_zone=${ac_cv_struct_tm_zone='yes'}
  ac_cv_type_timezone=${ac_cv_type_timezone='time_t'}

so there is a difference.

   Perhaps configure doesn't handle the new Redhat distribution correctly.
   "POSIX-man, we need your help!!"

As seen below, this appears to be the case.

Sigbjorn Finne writes:

   Hmm.. (struct tm) with the newfangled GNU libc doesn't appear to
   have tm_gmtoff and tm_zone - a bit odd. To work around this until we
   have a proper fix, make sure that this is the case (look in
   sys/time.h or time.h.) If it is, comment out the use of ZONE,
   GMTOFF and SETZONE in Time.lhs. (You'll have to do similar edits when
   you get on to compiling ghc/lib/cbits )


When I check out time.h, I find the definitions of tm_zone and
tm_gmtoff are embedded in that same damn if statement based on
__USE_BSD.  Almost all of these things seem to be offshoots of the
same damn problem.  Unfortunately, I am not sure how to paste these
definitions into a Haskell file; I think that it must go through a
couple of levels of indirection.  So I'll have to do the commenting
out solution that you suggested.

Sigh.  I keep on plugging.

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.intermetrics.com/~dlb



Re: The latest in a continuing saga......

1998-02-11 Thread David Barton

Sigbjorn writes:

   That is probably the simplest thing to do until we get a grasp on
   changes made in the version of GNU libc2 that RH5.0 ships with. If
   you're willing to experiment, doing 

 foo% cd ghc/lib
 foo% make libHS.a required/Time_HC_OPTS=-optc-D__USE_BSD

   may (or may not) help.

Nope; why, I don't know.

Sometimes the direct approach is best.  I know that you aren't
supposed to do it, that stuff in /usr/include should never be touched;
however, I just changed the damn definition in time.h.  Heck with
it.  This made it through.  I hereby swear to all that I'll go change
it back when the compile is done.

*sigh*.  Thanks for all the support, and the suggestions.

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.intermetrics.com/~dlb



The latest in a continuing saga......

1998-02-11 Thread David Barton

Oh, dear.  This one I don't even know how to *start* with.

When compiling the library, the compile crashes with the following:

  rm -f required/Time.o ; if [ ! -d required/Time ]; then mkdir required/Time; else 
find required/Time -name '*.o' -print | xargs rm -f __rm_food ; fi ; 
  ../../ghc/driver/ghc -recomp -cpp -fglasgow-exts -fvia-C -Rghc-timing -O -split-objs 
-odir required/Time  -monly-3-regs -H16m -H8m  -c required/Time.lhs -o required/Time.o 
-osuf o
  ghc: ignoring heap-size-setting option (-H8m)...not the largest seen
  ghc: ignoring heap-size-setting option (-H8m)...not the largest seen

  NOTE: Simplifier still going after 4 iterations; bailing out.

  NOTE: Simplifier still going after 4 iterations; bailing out.
  ghc: 1299013876 bytes, 690 GCs, 10917323/11954952 avg/max bytes residency (53 
samples), 0.02 INIT (0.01 elapsed), 164.85 MUT (179.05 elapsed), 183.51 GC (186.36 
elapsed) :ghc
  ghc: module version unchanged at 1
  /usr/src/ghc/tmp/ghc8242.hc:35545: structure has no member named `tm_zone'
  /usr/src/ghc/tmp/ghc8242.hc:36256: structure has no member named `tm_zone'
  /usr/src/ghc/tmp/ghc8242.hc:36304: structure has no member named `tm_gmtoff'
  /usr/src/ghc/tmp/ghc8242.hc:37074: structure has no member named `tm_zone'

Given that the names listed don't even occur in Time.lhs, and the file
"ghc8242.hc" doesn't exist in the tmp directory when I try to look at
it, I am sunk without a trace.  Any suggestions?


Dave Barton *
[EMAIL PROTECTED] )0(
http://www.intermetrics.com/~dlb



Re: Updated to Redhat 5.0, and now dead.....

1998-02-10 Thread David Barton

Simon Marlow writes:

1) The new library seems to use __USE_BSD rather than __FreeBSD__; I
   had to change one "ifndef" to get it through "gmake boot".

   Huh?  Which library?

Just about all of them; hopping into /usr/include and doing a "grep
USE_BSD" gives a whole mess of files: grp.h, math.h, netdb.h,
setjmp.h, signal.h, stdio.h, stdlib.h, string.h, termios.h, unistd.h.
Similarly, doing a "grep FreeBSD" in the same directory produces
nothing.

Let me be a bit more specific: during "gmake boot" the compile crashed
with the following message:

makeinfo.c: In function `cm_infoinclude':
makeinfo.c:6344: conflicting types for `sys_errlist'
/usr/include/stdio.h:221: previous declaration of `sys_errlist'

The definition in "makeinfo.c" surrounds this definition with a
"ifndef __FreeBSD__" test.  The definition in /usr/include/stdio.h is
surrounded by a "ifdef __USE_BSD" test.  The two were sufficiently
similar that I theorized that the name of the flag had changed
somehow.  Changing the name of the if statement in makeinfo.c to
"ifndef __USE_BSD" allowed it to continue to compile.

2) The full pathname to cpp had changed; I had to modify "mkdependHS"
   by hand.

   re-configure instead: other things may have changed.

Is it safe to copy the results of the "gmake boot" into my binary
directory?  The problem is that the make file is going off of the
"mkdependHS" from the previous build, which is installed in my binary
directory.

Then I got to the biggie; I am not sure what to do here.  The "gmake
boot" gives me lots and *lots* of errors of the following form:

/usr/include/sys/uio.h:37: macro or `#include' recursion too deep
/usr/include/sys/uio.h:39: macro or `#include' recursion too deep
/usr/include/sys/uio.h:47: macro or `#include' recursion too deep
/usr/include/sys/uio.h:49: macro or `#include' recursion too deep

This happened on lots of files.  I suspect that, because of this, I
get some undefined constant errors:

main/Signals.c:143: (Each undeclared identifier is reported only once
main/Signals.c:143: for each function it appears in.)
main/Signals.c:143: parse error before `addr'
main/Signals.c:150: `addr' undeclared (first use this function)
main/Signals.c:150: parse error before `stks_space'

   We don't have any RedHat 5 machines here (although apparently we're
   going to be upgrading soon) so I can't reproduce this yet.  

   Your best bet is to try to track this down yourself - find out which
   type in Signals.c isn't recognised (10-1 it's a typedef that's either
   changed name in RedHat 5 or moved to a different header file).  I'm
   not sure about the uio.h problem, it could be that one of our macros
   is conflicting with a system header file.

Dumb me; I cut off the error one line too soon.  The previous line
said:

main/Signals.c:143: `caddr_t' undeclared (first use this function)

which gives the name of the undeclared variable.  Tracking this down
was a bit of a task; however, I found it in "sys/types.h", surrounded
by a "ifdef __USE_BSD" test.  So, fine; I know that "__USE_BSD" is not
defined here.  On the other hand, I don't know what to do about it.  I
suppose I could simply take that definition out of the loop in the
types.h file; however, I don't know what else this would affect.  I
could follow the "makeinfo.c" example and put the definition inside of
"Signals.c", surrounded by an "ifndef __USE_BSD" statement; however,
where else will it crop up?

So, again, I am at a loss; I don't quite know what to do.  Any
suggestions?

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.intermetrics.com/~dlb



Updated to Redhat 5.0, and now dead.....

1998-02-09 Thread David Barton

Sigh.  I upgraded to Redhat 5.0 between GHC 2.10 and GHC 3.00.
Catastrophe!!  Ah, well, I knew things had been going too well.

I fixed a couple of errors:

1) The new library seems to use __USE_BSD rather than __FreeBSD__; I
   had to change one "ifndef" to get it through "gmake boot".

2) The full pathname to cpp had changed; I had to modify "mkdependHS"
   by hand.

Then I got to the biggie; I am not sure what to do here.  The "gmake
boot" gives me lots and *lots* of errors of the following form:

/usr/include/sys/uio.h:37: macro or `#include' recursion too deep
/usr/include/sys/uio.h:39: macro or `#include' recursion too deep
/usr/include/sys/uio.h:47: macro or `#include' recursion too deep
/usr/include/sys/uio.h:49: macro or `#include' recursion too deep

This happened on lots of files.  I suspect that, because of this, I
get some undefined constant errors:

main/Signals.c:143: (Each undeclared identifier is reported only once
main/Signals.c:143: for each function it appears in.)
main/Signals.c:143: parse error before `addr'
main/Signals.c:150: `addr' undeclared (first use this function)
main/Signals.c:150: parse error before `stks_space'

when I do "gmake all" after the "gmake boot".  I am using Linux under
Redhat 5.0, with all the errata packages installed.

Any suggestions?

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.intermetrics.com/~dlb



Re: Ambiguous Type Error

1998-01-05 Thread David Barton

Simon:

Thanks a million; that's just what I needed.

Yes, I know the context on the type doesn't do anything.  I just got
in the habit of putting them in when I thought they did, and I haven't
trained myself out of it yet.  It seems useful documentation, if
nothing else; however, if Standard Haskell nukes them I won't cry too
much.

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.intermetrics.com/~dlb






Ambiguous Type Error

1997-12-22 Thread David Barton

I have enclosed below a test file that causes an error that puzzles
me.  Both GHC and Hugs kick it out, so at least they agree; however, I
must admit that I don't understand it.

GHC gives the following error:

test.hs:1: Ambiguous context `{Physical taZ0}'
   `Physical taZ0' arising from use of `pulse' at test.hs:50
When checking signature(s) for: `example2'

Hugs, on the other hand, gives:

ERROR "test.hs" (line 48): Ambiguous type signature in inferred type
*** ambiguous type : (Physical a, Physical b) => BasicSignal Time Voltage
*** assigned to: example2

Again, this puzzles me.  When I start querying types, I get:

:t pulse
pulse :: (Physical b, Physical a) => BasicSignal a b

There is something about field names here that causes this ambiguity;
as might be expected, 

:t Pulse
Pulse :: (Physical b, Physical a) => a -> a -> b -> BasicSignal a b
:t example1
example1 :: BasicSignal Time Voltage

neither of which is a great surprise.  A hint is found by the
following two type queries (I have added two carriage returns for
clarity):

:t pulse{start_time=(Sec 1.0),pulse_width=(Sec 3.0),amplitude=(V 2.0)}
pulse{start_time = Sec 1.0, pulse_width = Sec 3.0, amplitude = V 2.0} :: 
  (Physical a, Physical b) => BasicSignal Time Voltage
Test> :t Pulse{start_time=(Sec 1.0),pulse_width=(Sec 3.0),amplitude=(V 2.0)}
Pulse{start_time = Sec 1.0, pulse_width = Sec 3.0, amplitude = V 2.0} :: 
  BasicSignal Time Voltage

Can anyone help here?  Why is the context carried over for pulse?  Are
partially resolved records usable in the presence of polymorphic
types?
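
For what it is worth, the one workaround I have found is to pin down
the type of pulse itself before the update (a sketch against the test
file below; I believe this forces the ambiguous variables to Time and
Voltage before the update is typed, though I would welcome a real
explanation):

example3:: BasicSignal Time Voltage
example3 = (pulse :: BasicSignal Time Voltage)
             {start_time = (Sec 1.0),
              pulse_width = (Sec 3.0),
              amplitude = (V 2.0) }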

Thanks for any and all help.

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.intermetrics.com/~dlb

module Test where

class Physical a where
  toPhysical:: Float -> a
  fromPhysical:: a -> Float

class Signal s where
  toSig:: (Physical a, Physical b) => (s a b) -> a -> b

data Time = Sec Float
data Voltage = V Float

instance Physical Time where
  toPhysical x = Sec x
  fromPhysical (Sec x) = x

instance Physical Voltage where
  toPhysical x = V x
  fromPhysical (V x) = x

data (Physical indep, Physical dep) => BasicSignal indep dep = 
Pulse {start_time::indep,
   pulse_width::indep,
   amplitude::dep}

instance Signal BasicSignal where
  toSig Pulse{start_time,pulse_width,amplitude} = 
let
  st = fromPhysical start_time
  pw = fromPhysical pulse_width
  zero = toPhysical 0.0
  chk t = 
let ft = fromPhysical t
in if ft < st then zero
   else if ft < (st + pw) then amplitude
   else zero
in chk

pulse:: (Physical a, Physical b) => BasicSignal a b
pulse = Pulse{start_time = toPhysical 0.0}

example1:: BasicSignal Time Voltage
example1 = Pulse {start_time = (Sec 1.0),
  pulse_width = (Sec 3.0),
  amplitude = (V 2.0) }

example2:: BasicSignal Time Voltage
example2 = pulse {start_time = (Sec 1.0),
  pulse_width = (Sec 3.0),
  amplitude = (V 2.0) }






Quick report

1997-12-04 Thread David Barton

Something of little interest to most: rebuilding GHC 2.09 for
Linux worked fairly well.  The only hold-up was the fairly frequent
necessity to allocate more heap for specific modules.  A list of the
modules that failed, and the heap I had to allocate for them, is:

ghc/compiler/rename/RnExpr: 8m
ghc/lib/ghc/PrelTup: 10m
ghc/lib/ghc/PrelRead: 12m
ghc/lib/ghc/IOHandle: 10m
ghc/lib/required/Complex: 8m
ghc/lib/required/Time: 16m
ghc/hslibs/contrib/src/Cubic_Spline: 8m
ghc/hslibs/contrib/src/SetMap: 8m
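
(For anyone repeating this: I set the extra heap per module on the
make command line, along the lines of

foo% gmake all required/Time_HC_OPTS=-H16m

using the per-module _HC_OPTS hook.  I am not certain that is the
canonical spelling for every module above, so adjust the path and the
size to taste.)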

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.intermetrics.com/~dlb



Re: Haskell equiv of ML-Lex/ML-Yacc ?

1997-11-24 Thread David Barton

Thomas Johnsson writes:

   Q: does anyone know if there's a port of this stuff to Haskell?
   Note that I'm not after a nondeterministic SLR parser (Ratatosk),
   or some such.  For pedagogical reasons I'd like the tools
   to be as similar as possible to Yacc/Bison/ML-Yacc, etc.

I am using Alex and Happy.  I don't know how "similar" they are; close
in spirit, I think.  At least, I find that I use the same "mental
muscles" using these tools that I used to use with Lex and Yacc.


Dave Barton *
[EMAIL PROTECTED] )0(
http://www.intermetrics.com/~dlb






Pat on the back

1997-10-01 Thread David Barton

This isn't a bug; quite the opposite.  But I've been so frequent here
with my comments and bug reports that I really must report a success.

With the last release, GHC 2.07 fully self-compiles on a Linux box.  I
first compiled it with 0.29 (as I have had to before), and then it
compiled itself.  Where it always died before with a seg fault or some
other error, this time the compile went flawlessly.  It also works on
the projects that I am concerned with here.

For others, this may be old news.  To me, it is a major step.  I want
to offer my thanks and congratulations to the Glasgow team.  Way to
go!  Fantastic stuff!

The make system still leaves a bit to be desired; I append my comments
below, which I made while I was nursing the process through to
completion.  However, this should not conceal the basic fact of
success.  It is a footnote, and properly goes there.

Yea, team!!!

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.intermetrics.com/~dlb

Notes on installing GHC 2.07 on a Linux machine using ghc-0.29:

There are still problems with Happy in the build file.  The present
file attempts to compile happy with 0.29, and then crashes.  I fixed
this by removing it from the ProjectsToBuild variable and adding it in
later.  At least it got through "gmake boot" this time!

Moreover, there are problems with happy in "gmake install".  Even with
happy in the "ProjectsToBuild" variable, it did not install.  Since it
is a single binary, I just copied it by hand.  I include the line from
my "build.mk" file immediately following:

ProjectsToInstall += literate happy

One other problem: "gmake clean" dies with ten thousand "rmdir:
directory XXX non-empty".  I include the entire line of messages
below.  This is a real pain of a problem; it means that I have to
delete the fptools shadow directory in order to remake the compiler,
rather than just doing a make clean.

rmdir: ghc/ArrBase: Directory not empty
rmdir: ghc/ConcBase: Directory not empty
rmdir: ghc/GHCerr: Directory not empty
rmdir: ghc/GHCmain: Directory not empty
rmdir: ghc/IOBase: Directory not empty
rmdir: ghc/IOHandle: Directory not empty
rmdir: ghc/PackBase: Directory not empty
rmdir: ghc/PrelBase: Directory not empty
rmdir: ghc/PrelIO: Directory not empty
rmdir: ghc/PrelList: Directory not empty
rmdir: ghc/PrelNum: Directory not empty
rmdir: ghc/PrelRead: Directory not empty
rmdir: ghc/PrelTup: Directory not empty
rmdir: ghc/STBase: Directory not empty
rmdir: ghc/UnsafeST: Directory not empty
rmdir: required/Array: Directory not empty
rmdir: required/CPUTime: Directory not empty
rmdir: required/Char: Directory not empty
rmdir: required/Complex: Directory not empty
rmdir: required/Directory: Directory not empty
rmdir: required/IO: Directory not empty
rmdir: required/Ix: Directory not empty
rmdir: required/List: Directory not empty
rmdir: required/Locale: Directory not empty
rmdir: required/Maybe: Directory not empty
rmdir: required/Monad: Directory not empty
rmdir: required/Numeric: Directory not empty
rmdir: required/Prelude: Directory not empty
rmdir: required/Ratio: Directory not empty
rmdir: required/System: Directory not empty
rmdir: required/Time: Directory not empty
rmdir: glaExts/ByteArray: Directory not empty
rmdir: glaExts/Foreign: Directory not empty
rmdir: glaExts/GlaExts: Directory not empty
rmdir: glaExts/MutVar: Directory not empty
rmdir: glaExts/MutableArray: Directory not empty
rmdir: glaExts/ST: Directory not empty
rmdir: concurrent/Channel: Directory not empty
rmdir: concurrent/ChannelVar: Directory not empty
rmdir: concurrent/Concurrent: Directory not empty
rmdir: concurrent/Merge: Directory not empty
rmdir: concurrent/Parallel: Directory not empty
rmdir: concurrent/SampleVar: Directory not empty
rmdir: concurrent/Semaphore: Directory not empty

Files that needed more heap:

lib/ghc/PrelTup
lib/ghc/ArrBase
lib/ghc/PrelRead
lib/ghc/IoHandle
lib/required/IO
lib/required/Complex
lib/required/Time
hslibs/ghc/src/SocketPrim
hslibs/hbc/src/Number
hslibs/contrib/src/Cubic_Spline
hslibs/contrib/src/SetMap

Files that needed more heap with 2.07

ghc/compiler/rename/RnExpr

However, I also record final success; GHC 2.07 self-compiles.
Glorious!



Re: Standard Haskell

1997-08-22 Thread David Barton

I *strongly* agree with John.

Let's not even *talk* about "official" standardization until we get
Haskell 1.5 (nominally, "Standard" Haskell) done.

Then, and only then, will the question of "official" standardization
become (perhaps!) relevant.

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.intermetrics.com/~dlb






Re: Standard Haskell

1997-08-22 Thread David Barton

Hans Aberg writes:

   At 07:10 97/08/22, David Barton wrote:
   Let's not even *talk* about "official" standardization until we get
   Haskell 1.5 (nominally, "Standard" Haskell) done.

 I believe we should keep the First Amendment. :-)

First Amendment?  Heck, if you even *think* about it, the Thought
Police will come breaking in your door!!! :-) :-)

Try it, and see..

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.intermetrics.com/~dlb






Re: Standard Haskell

1997-08-21 Thread David Barton

Fergus Henderson writes:

   ISO is the same.  But standards don't get updated every five years.
   Rather, each standard must be _reconsidered_ every five years.  One of
   the possible results is for the standard to be reapproved unchanged.
   If the standards committee does decide that the standard should be
   changed, then it will start a new project to produce a revised version
   of the standard.  This process itself takes years.  So typically
   language standards get updated less than once every ten years.

   Fortran: 66, 77, 90.
   COBOL: 74, 85
   Ada: 83, 95.
   C: 89, 9X.  
  (Original standard in '89, currently undergoing revision;
   revised standard, tentatively titled "C9X" due in 99, but
   might not happen until 2000 or later.)

True.  Others have a greater velocity of change, particularly if they
are newer; VHDL, for example.

   However, standards committees can publish normative amendments
   in the intervening periods.  For example, there have been some
   normative amendments to the C standard since 89 (relating to
   internationalization and numerical extensions).

There are actually several options here.  A "normative amendment" is
essentially (in IEEE land) the same as a reballot; it just doesn't
require the document to be reprinted.  The VHDL committee produced a
"sense of the working group" report that, while not officially
normative, gave the resolution to several ambiguities and the like.

   This is not _necessarily_ true.  For example, the ISO Ada 95 standard
   is freely available on the net.

It all depends on who gets the money.  In this case, the AJPO *paid*
for the free availability.

   However, convincing ISO of this would be a significant hurdle to
   overcome.

Agreed; perhaps impossible.

   In any case, I agree with Dave Barton that ISO standardization for
   Haskell should not be considered until after the current effort
   at defining "Standard Haskell" is complete.

Even if then.


Dave Barton *
[EMAIL PROTECTED] )0(
http://www.intermetrics.com/~dlb






Re: Standard Haskell

1997-08-21 Thread David Barton

Hans Aberg writes:

   I do not think that the Pascal standardizing model is being used
   anymore; instead one schedules a new revision, say every five years
   (this is used for C++). There is already an application put in for
   ISO/ANSI standardizing of Java, and I think Java is younger than
   Haskell. So I think the question should at least be investigated;
   perhaps it is the developed Standard Haskell that should be made
   ISO/ANSI.

Having been through the standardization wars many times, perhaps I
should interject here.  Virtually all of my experience has been within
the IEEE process, although IEEE standards are often upgraded to ANSI
and ISO standardization fairly quickly, with only an "up/down" vote
(it is *not* automatic, however; Verilog was rejected).

The IEEE *requires* restandardization every five years.  If another
ballot is not taken, then the standard is dropped.

Standardization does not particularly guarantee stability.  It does
guarantee three things:

1) A certain process has been followed.  ANSI and ISO have rules about
the process to be followed by organizations that submit standards to
them for approval, such as the IEEE and the EIA.  This includes
openness (anyone wishing to participate in the process and the ballot
may) and a certain level of approval.

It also assures *lots* of bureaucracy.  And I mean lots.  More than
that.  No, even more than that.  Lots more.

2) The final result is independent, and non-proprietary.  This can be
more or less true; Verilog is, again, a good counterexample.  This is
not a worry with Haskell; I don't know *anyone* who thinks that
Haskell is the result of a single university or company.

3) It also means (cynically) that the standardization organization
makes money off of the standard's publication.  If we were to
standardize Haskell, the copyright of the document would have to be
transferred to the standardization organization.  This means that we
could no longer distribute the Haskell Report free on the net, and
with every download.

Think about this.  Really.  No more free downloads of the Report.
(The Library is a sticky issue, which we are fighting within the IEEE
now with respect to the standard VHDL packages for some of the related
standards.  If anyone is interested in the result, I'll start posting
what I hear on this list.)

   I would rather think that the reason that functional languages are
   not used is the lack of an ISO/ANSI standard, plus the lack of
   standard ways of making cooperation with other, imperative
   languages.

I must disagree here.  After having been in the standardization
business for a while, I don't think that standardization means that
much to widespread usage.  WAVES is a good counterexample in the field
of digital CAD.  It does have *some* positive effect, but this really
is limited.  There are *lots* of standards that are nothing more than
pieces of paper on the IEEE's bookshelves.

I don't want to sound too pessimistic here.  I wouldn't spend so much
of my professional time in standardization efforts if I didn't think
it was worth while.  There are some things that standardization
brings.  In particular, don't look for widespread use of Haskell on
government software without an officially recognized standard in
place.  It also does give commercial vendors some feeling that there
may be a market there, or at least some interest.  And, frankly, it
would make my life a whole lot easier; I am creating a standard that
is based in Haskell and defined by a series of Haskell files.  At this
point I am planning on simply referring to the Haskell report as
prior, and existing, art; however, I am expecting to take some flak on
that.  It would be vastly easier for me if I could point to a standard
instead.

But standardization is a two edged sword.  It does take up a *lot* of
time.  The common mode of standard creation these days is to go to a
standardization body with something close to ready, and hope for few
changes and a relatively painless ballot.  Haskell is certainly ready
for this kind of effort.  But expect some change, and a two to
two-and-a-half year process before the ballot comes in.

I would be happy to answer further questions about the IEEE
standardization process if it is relevant, and what I know about other
standardization processes.  But I don't want to get too bogged down in
it.  Certainly we should not approach a standardization organization
until the present effort for creating Standard Haskell is complete.
Therefore, it should not take up much of our time.

One more note: frankly, you (the members of this list) don't entirely
control the question.  Given that the Haskell report is freely
available, *anyone* can submit it to one of the standardization
organizations to start the process if they wish.  Of course, this need
not affect what the compiler builders here do (and probably wouldn't).
Nevertheless, the question need not necessarily concern us here; if
someone feels strongly enough to 

Re: GHC 2.05 bug

1997-08-20 Thread David Barton

Third in a series on the same set of files.

Thanks to Sigbjorn Finne's patch, the entire program now
compiles. However, the produced program now dies with a segmentation
fault.  The text (which will probably help not one whit) occurs as
follows:

Segmentation fault caught, address = 38209318
IOT trap/Abort (core dumped)

This will probably be taken to Email now, but I wanted to let people
know how things are going.  I note that the 2.05 compiler that I
attempt to build using 2.05 seg faults as well, so this is probably a
more general problem.

I am *more* than willing --- eager --- to recompile with debugging
flags on and the like, saving the code produced and trying to figure
out where the seg fault is occurring.  I will work with anyone that I
can to make this succeed.  I recognize this is probably unique to the
Linux boxes that I am using, and therefore am willing to put a fair
amount of effort into finding the problem.

It is also worth noting that at least *one* of the files that the
program produces was created, so that some of the code is working.

Many, many thanks for past fixes.

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.intermetrics.com/~dlb



Re: Native Mode on Linux i386?

1997-07-23 Thread David Barton

I haven't had time to download his binary distribution and try to
recompile 2.04 yet.  When I do, I'll let you know how it went.

Sometimes I just don't get the time to do the things I *want* to do.

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.intermetrics.com/~dlb



Re: Thoughts on Records and Field Names

1995-11-21 Thread David Barton


After having posted on records, I decided to give them a try with a
real example.  So I constructed a balanced binary tree (given the
recent questions).  I decided to extend a bit into the "unionized"
record territory by at least marking the null tree with a constructor,
and see how things went.

I also decided to drive my implementation as far as possible towards
the "implementation independent" side of things.  Therefore, I wanted
to make all of my functions totally independent of the actual
implementation of the record, and only use functions provided by the
"record class" and the "field name classes".  The result was
interesting, and is recorded below.

A tree is either nil or a full record.  The record looks like:

key: a;
info: b;
lchild: tree;
rchild: tree;
balance: Flag;

Here, I reproduce the fieldname class from my last post:

 class FieldName a f b where
   set:: a -> f -> b -> a
   get:: a -> f -> b

And the type of my balance flag

 data BalFlag = Balanced | Lheavy | Rheavy
 instance Eq BalFlag where
   Balanced == Balanced = True
   Lheavy == Lheavy = True
   Rheavy == Rheavy = True
   _ == _ = False

In addition to this, I will create a type for each of the fields in
the record:

 data Key = Key  deriving Text
 data Info = Info  deriving Text
 data Lchild = Lchild  deriving Text
 data Rchild = Rchild  deriving Text
 data Bflag = Bflag  deriving Text

And now we create a class for each field name in the record (all
pretty mechanical, so I don't care too much about the time for the
implementation):

 class FieldName a Key b => KeyField a b where
    key:: a -> b
    key d = get d Key

 class FieldName a Info b => InfoField a b where
    info:: a -> b
    info d = get d Info

 class FieldName a Lchild b => LchildField a b where
    lchild:: a -> b
    lchild d = get d Lchild

 class FieldName a Rchild b => RchildField a b where
    rchild:: a -> b
    rchild d = get d Rchild

 class FieldName a Bflag b => BflagField a b where
    bflag:: a -> b
    bflag d = get d Bflag


And the record class definition is:

 class (KeyField (c a b) a,
        InfoField (c a b) b,
        LchildField (c a b) (c a b),
        RchildField (c a b) (c a b),
        BflagField (c a b) BalFlag) => BalTree c a b where
   nulTree:: c a b
   isNulTree:: c a b -> Bool
   isBalNode:: c a b -> Bool

This record definition dissatisfies me somewhat.  The boolean
functions "isNulTree" and "isBalNode" are perilously close to actual
data constructors.  The "nulTree" function to provide an
uninitialized record is also rather dissatisfying.  However, I cannot
really get along without them.
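
Still, the interface buys what I was after.  Here, for example, is an
insertion routine written entirely against the class functions, never
against the representation (a sketch only: it ignores the rebalancing
that the balance flag is there for, and it assumes an Ord context on
the keys):

 insertTree:: (Ord a, BalTree c a b) => a -> b -> c a b -> c a b
 insertTree k i t
   | isNulTree t = set (set nulTree Key k) Info i
   | k < key t   = set t Lchild (insertTree k i (lchild t))
   | otherwise   = set t Rchild (insertTree k i (rchild t))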

And now, the type which is our implementation of the tree:

 data BalTreeRec a b = NulTree | 
   BalNode a b (BalTreeRec a b) (BalTreeRec a b) BalFlag

Before I go into the field instances, I discover I need to create a
"zero" element for some of the fields.  This corresponds to the
"uninitialized" value that John Peterson has referred to in his papers
on structures.

 class HasZero a where
   getZero:: a

And each of the field instances, which I intend to write with an Emacs
macro off an initial sample:

 instance (HasZero a, HasZero b) => FieldName (BalTreeRec a b) Key a where
   set (NulTree) Key x = BalNode x getZero NulTree NulTree getZero
   set (BalNode _ f1 f2 f3 f4) Key x = BalNode x f1 f2 f3 f4
   get (BalNode x _  _  _  _ ) Key   = x
 instance (HasZero a, HasZero b) => KeyField (BalTreeRec a b) a

 instance (HasZero a, HasZero b) => FieldName (BalTreeRec a b) Info b where
   set (NulTree) Info x = BalNode getZero x NulTree NulTree getZero
   set (BalNode f1 _ f2 f3 f4) Info x = BalNode f1 x f2 f3 f4
   get (BalNode _  x _  _  _ ) Info   = x
 instance (HasZero a, HasZero b) => InfoField (BalTreeRec a b) b

 instance (HasZero a, HasZero b) =>
   FieldName (BalTreeRec a b) Lchild (BalTreeRec a b) where
   set (NulTree) Lchild x = BalNode getZero getZero x NulTree getZero
   set (BalNode f1 f2 _ f3 f4) Lchild x = BalNode f1 f2 x f3 f4
   get (BalNode _  _  x _  _ ) Lchild   = x
 instance (HasZero a, HasZero b) => LchildField (BalTreeRec a b)
   (BalTreeRec a b)

 instance (HasZero a, HasZero b) =>
   FieldName (BalTreeRec a b) Rchild (BalTreeRec a b) where
   set (NulTree) Rchild x = BalNode getZero getZero NulTree x getZero
   set (BalNode f1 f2 f3 _ f4) Rchild x = BalNode f1 f2 f3 x f4
   get (BalNode _  _  _  x _ ) Rchild   = x
 instance (HasZero a, HasZero b) => RchildField (BalTreeRec a b)
   (BalTreeRec a b)

 instance (HasZero a, HasZero b) =>
   FieldName (BalTreeRec a b) Bflag BalFlag where
   set (NulTree) Bflag x = BalNode getZero getZero NulTree NulTree x
   set (BalNode f1 f2 f3 f4 _) Bflag x = BalNode f1 f2 f3 f4 x
   get (BalNode _  _  _  _  x ) Bflag   = x
 instance (HasZero a, HasZero b) => BflagField (BalTreeRec a b) BalFlag

Thoughts on Records and Field Names

1995-11-15 Thread David Barton


I have been thinking about records in Gofer, Haskell, and MHDL (Yet
Another Haskell Related Language) for a little while.  I know this is
a little late in the game for Haskell 1.3 (and so on), but this is the
first moment I have had to explore this a little, and I did want to
post my thoughts.

All this was started by a message from Simon Peyton-Jones way back
on May 16, 1994 (yes, that long ago).  In it, he proposes using the
type system to construct, by a disciplined set of declarations
(possibly generated by some syntax sugar), a scheme for extensible
records.  I was very much attracted to the idea, as it seemed to
handle some of the name space problems that Nick North objected to as
far back as November of 1993.  Very nice.

In his followup, Mark Jones suggested that we make a class for each
field in any record, and extend records by making an instance for the
class for any record type that does *not* have a field of that name.
This had some attraction; however, it also required changing all the
declarations each time you declare a new record, which is OK as a
basic language feature but not as a very cheap add-on to an existing
language.  However, the idea of a class for each field seemed to have
some interesting possibilities, so I decided to explore them.

First, I want to define some things we should be able to do with any
record and any field:

 class FieldName a f b where
   set:: a -> f -> b -> a
   get:: a -> f -> b

Set does just what you think it would do: sets the value for a field
of type "b" in a record of type "a" (ignore the f for the moment).
Similarly, get does what is intuitively obvious: gets a value of type
"b" from a record of type "a".  Of course, if we are going to have get
and set routines of this generality, we need to specify the field as
well.  This is the reason for the "f" type above: that type specifies
the field.  Yes, a type for each field.  It does work out nicely when
you try it.

Let's do a record like Date (day: Int, month: Int, year: Int).  Let's
ignore the type of the implementation of the record for a moment, and
do the class definitions for the fields and the record itself.

 data Day = Day   deriving Text
 data Month = Month   deriving Text
 data Year = Year deriving Text

 class FieldName a Day b => DayField a b where
    day:: a -> b
    day d = get d Day

 class FieldName a Month b => MonthField a b where
    month:: a -> b
    month d = get d Month

 class FieldName a Year b => YearField a b where
    year:: a -> b
    year d = get d Year

As you can see, I am not above using the initial case rule for types
versus function names to my advantage.  No, not at all.  Not one
little bit.

Now for the record declaration:


 class (DayField a Int, MonthField a Int, YearField a Int) => Date a

We actually have a little more freedom here if we wish.  We can give
Date more parameters, and have those parameters reflected in the
context if desired.  This was not necessary here, so I did not bother;
however, the possibility should be evident to all.

The implementation, given a type, is so basic that the instances
virtually write themselves:


 data DateRec = DateRec Int Int Int

 instance FieldName DateRec Day Int where
  set (DateRec _ a b) Day x = DateRec x a b
  get (DateRec x _ _) Day = x

 instance FieldName DateRec Month Int where
  set (DateRec a _ b) Month x = DateRec a x b
  get (DateRec _ x _) Month = x

 instance FieldName DateRec Year Int where
  set (DateRec a b _) Year x = DateRec a b x
  get (DateRec _ _ x) Year = x

 instance Date DateRec
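
A quick check that the machinery hangs together (the empty instance
declarations below just pick up the default methods, and I believe
they are also needed for the Date instance above to be legal; the
sample values are, of course, made up):

 instance DayField DateRec Int
 instance MonthField DateRec Int
 instance YearField DateRec Int

 sample:: DateRec
 sample = set (set (set (DateRec 0 0 0) Day 15) Month 11) Year 1995

 sampleMonth:: Int
 sampleMonth = month sample    -- 11, via the default method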

As with Simon's proposal, we have complete freedom over which
implementation we choose to use.  A tuple would have done just as
well.  I have chosen to use the simple field names for the type and
the function names rather than the class name (where I append a Field
suffix).  This is a matter of taste; however, I anticipate using the
names in the get and set routines more often than declaring records
with them.

If we find that we need to worry about BC as well as AD, we can extend
the date accordingly:

 data AncientMark = AncientMark  deriving Text

 class FieldName a AncientMark b => AncientMarkField a b where
   ancientMark:: a -> b
   ancientMark r = get r AncientMark

 data BC_AD = BC | AD

 class (Date a, AncientMarkField a BC_AD) => AncientDate a

I omit the (fairly obvious) example of an implementation here.  Again,
the mark could have been anything that we wished, even polymorphic (if
we wish to make the record class polymorphic in that parameter as
well).

Now, we move on to features that are more particular to this approach
of defining field names as types.  We can also define a record for a
person giving his name and the year of his birth.  The field name for
the year we already have defined; we simply add a field name for the
name of the person involved.

 data Name = Name  deriving Text
 class FieldName a Name b => NameField a b where
    name:: a -> b
    name d = get d Name

Do functions exist?

1993-11-19 Thread David Barton


Greg Michaelson writes:

   Incidentally, my point about not bothering to evaluate functional
   programs whose final values are functions was serious. Presumably,
   people don't generally write programs that return functions as
   final values?

I suppose it depends on what you call a "function" from the point of
view of the implementation; however, given any answer that I can think
of, I disagree with this statement.  If a function is seen as a piece
of text, think of Mathematica as an example of something that produces
a function as an answer.  If a function is seen as an executable
representation, then a compiler is a trivial counter-example.  As
another, consider a circuit analyzer that produces a function from
time to voltage that represents a signal at a node in the circuit.
This might be passed to (for example) a graphical display program that
displays it as a trace on the screen.
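
To make the circuit example concrete in Haskell terms (a toy sketch;
the names are invented for illustration):

 type Time = Float
 type Voltage = Float

 -- the analyzer's "answer" for one node is a genuine function
 nodeSignal:: Voltage -> Float -> (Time -> Voltage)
 nodeSignal amplitude freq = \t -> amplitude * sin (2 * pi * freq * t)

The final value is a function from time to voltage; a display program
can then sample it wherever it likes.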

Even if you think of a function as purely a Haskell-understandable
object, I think this is short sighted.  For example, a Haskell program
might produce a function that is stored away and used by a later
Haskell program as data.  The Mathematica example is of interest here,
as is the compiler.

Therefore, I disagree strongly with the above statement.

Dave Barton
[EMAIL PROTECTED]




Resolution of overloading

1993-11-18 Thread David Barton


John Peterson writes:

   OK - let's get operational!  

My man!  You know, I *like* formal methods and equational reasoning; I
just can't get my mind to "do the right thing" in all cases.  When
reasoning about correctness, I do OK; however, type checking and the
like inevitably find me descending into a mass of operational, even
anthropomorphic arguments.  Ah, well.

With visions of Olivia Newton-John dancing through my head.

   Dave's choices are:

1) Build the resolution dictionary using the context of Module Two
(the point of declaration)

2) Build the resolution dictionary using the context of Module Three
(the point of use).

   While Dave is on the right track, neither of these choices really
   distinguishes the act of building a dictionary from referring to a
   dictionary. 

Accepted, and this is a useful distinction to make.  (skipping)

   With the type signature added, module Two instantiates the overloaded
   argument to trouble and the dictionary available in module Two is
   used.  In this case trouble does NOT carry a dictionary parameter to
   be instantiated by the call in Three.

Understood; however, without the type signature, there is still the
question of the dictionary contents.  Under choice (1), the dictionary
contents reflect the instance in module two; under choice (2), that of
module three.  One interesting thing about choice (1) is that a
sufficiently smart compiler (there goes that anthropomorphic reasoning
again) might optimize the parameter away based upon an analysis that
states there is only one instance visible at the site of the
declaration of trouble.  This is, of course, impossible with choice
(2). Another implication of choice (1) is that more information must
be passed across module boundaries; not just the type of the inherited
functions, but their "instance context" (if I may butcher the English
language) as well.

We are certainly at the point that Phil feared, with the following
change: the point at which the dictionary context is used to build the
dictionary changes the meaning of the program.  I don't know if this
is any more palatable to anyone else; I have little problem with it,
and do not think of it as a disaster.  Certainly the implication that
adding extra type information to the program changes its meaning
disaster.  Adding a type declaration that hides another one *does*
change the meaning of a program; however, this is just an implication
of the hiding semantics in general that we must live with in a scoped
name space.  We do get other nastinesses, such as the seemingly
paradoxical type error I mentioned in my last message.

Dave Barton
[EMAIL PROTECTED]




Resolution of overloading

1993-11-17 Thread David Barton


Puzzled, once again.  I think I reason too operationally about these
things.  It's a curse brought on by being brain-damaged by Basic
programming at an early age.

John Peterson writes:

   The issue is at what point the overloading of trouble, which would
   be typed as 
   trouble :: Problematic a => a -> a
   is resolved to a concrete type.  There are two different instances supplied
   to resolve Problematic(Int): one in module Two and another in module
   Three.  

OK so far.

   If the signature 
  trouble :: Int -> Int
   is placed in module Two then the assumption is that the instance in
   module Two is used.  However, if the overloading is resolved in Three,
   the other definition is used.  What is important is the definition of
   Problematic(Int) in force when the type checker encounters this
   context during type checking.  The added type signature forces this to
   happen in Two; without the signature this happens in Three.

The added type signature does indeed force type checking in Module
Two, whereas this can be (but need not be) put off until
module Three without it.  However, is this the only criterion for when
the overloading is resolved?  My operational brain gives me two
choices:

1) Build the resolution dictionary using the context of Module Two
(the point of declaration)

2) Build the resolution dictionary using the context of Module Three
(the point of use).
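
To make the choice concrete, my mental picture of the example is
something like the following (my reconstruction, as three separate
modules; the class and the instance bodies are invented):

 module One where
 class Problematic a where
   fix:: a -> a

 module Two where          -- imports One
 instance Problematic Int where fix = (+ 1)
 trouble:: Problematic a => a -> a
 trouble = fix

 module Three where        -- imports Two
 instance Problematic Int where fix = (* 2)
 result = trouble (3 :: Int)

Under choice (1) the dictionary passed to trouble is built from the
instance in Two, and result is 4; under choice (2) it is built from
the instance in Three, and result is 6.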

To me, this is orthogonal from when the actual type checking occurs.

Choice 1 is my preference: it allows the writer of trouble to
anticipate the meaning of his or her function using the context in
which it is created.  This is a little constraining; given this rule,
a new instance could not be written (say in module 3) and then trouble
called with a value of this new instance type.

Thus, this is at least connected to type checking (and thus not
completely orthogonal, as I stated above).  Choice 1 dictates the
seemingly paradoxical situation where a user could write an instance
for Problematic for, say, Float and immediately have the call to
trouble fail on a type error, even though the type of trouble is 
(Problematic a) => a -> a.  This is a nastiness that I am prepared to
live with for the sake of controllability; it forces the new instance
to be placed in a module inherited by both the declaring module and
the module of use.

   Currently, the C-T rule ensures that only one instance for a given
   type - class pair exists and that this instance is visible wherever
   both the class and type are in scope.  

Indeed; this makes the choice trivial.

   Allowing multiple instances (as in Phil's example) presents no
   implementation problems; the issue is entirely one of choosing either
   the current semantic simplicity and rigidity or a more flexible system
   with more complex semantics.  Personally, I would choose the
   latter!

And this I both understand and agree with.

Dave Barton
[EMAIL PROTECTED]




Type Coercions

1992-09-28 Thread David Barton

Has anyone done any research on automatic insertion of type coercions
into Haskell?  This is a requirement for our MHDL language, and I am
trying to find a regular way to do it.  I think I have found one, but
would be VERY grateful if there was an existing reference.
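
To be concrete about the shape of the thing (a sketch of the kind of
scheme I mean, not of the actual MHDL rule):

 class Coerce a b where
   coerce:: a -> b

 instance Coerce Int Float where
   coerce = fromIntegral

 -- the front end would then rewrite  f x  as  f (coerce x)
 -- wherever f demands a Float and x delivers an Int

The regular part I am after is deciding exactly where the compiler is
allowed to insert coerce.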

Dave Barton
[EMAIL PROTECTED]




Arrays and general functions

1992-09-08 Thread David Barton

Ken Sailor writes:

On the other hand, general functions and arrays are typically mixed in
a program.  If the distinction between the two is limited to type
declarations, then from my perspective it becomes difficult to read
and understand programs.  The difference between functions as rules
and arrays to me is much more significant than the difference between
adding reals and adding integers.  From your perspective, maybe any
distinction gets in the way.  In practice, I have not had this
problem.

The distinction indeed gets in the way; however, this may well be a
product of the application area in which I am working (I *did* admit
my bias in all this).  If we define the behavior of an engineering
component in a microwave system by a function, the distinction between
whether that function is defined by an array (table of values) or a
rule (a function definition in the present Haskell sense) had better
be hidden!  We want very much to treat these independent of the
mechanism of defining them.
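
To show what I mean by hiding it, a small sketch (the names and the
crude sample-and-hold lookup, which assumes a non-empty table, are
mine):

 type Behavior = Float -> Float         -- all we ever export

 fromRule:: (Float -> Float) -> Behavior
 fromRule f = f

 fromTable:: [(Float, Float)] -> Behavior
 fromTable tbl x =
   case [v | (k, v) <- tbl, k >= x] of  -- first sample at or past x
     (v:_) -> v
     []    -> snd (last tbl)            -- hold the final value

A client handed a Behavior simply applies it; it cannot (and should
not) tell which of the two produced it.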

I am interested in how the lack of a distinction gets in the way of
your reading and understanding programs, however.  After all, I do
want to make sure we are not introducing problems here!  Granting that
this is a somewhat theoretical discussion, could you elaborate a bit
here?  What kind of expression would become difficult to understand if
the distinction between arrays and rule-defined functions was hidden?
I don't want to impose on your time unreasonably, but a couple of
small examples might be helpful.

Dave Barton
[EMAIL PROTECTED]