RE: [Possible Spam] RE: [Haskell] Job Posting (Looking for a few good functional programmers)

2005-02-04 Thread David Bergman
Yaron wrote:

> I've been following OCaml/.NET integration, and it does seem 
> potentially quite interesting, particularly in a business 
> environment like ours where all of the traders use Windows 
> machines.  Which .NET implementation did you look at, OCamIL?  Or F#? 

F#

I wish there were an H#... Mondrian seemed to be a good initiative, but it is
probably no longer supported, is it?

/David

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


RE: [Haskell] Job Posting (Looking for a few good functional programmers)

2005-02-03 Thread David Bergman
Yaron,

This is probably off-topic, but: are you using, or have you considered using,
the .NET implementation of OCaml? I managed - painstakingly - to integrate
it into a toy .NET project of mine, using .NET Direct3D, and I see some virtue
in that combination.

If only we Haskellers were as lucky: both a fast implementation and one
integrated with a Real (trademark...) environment such as .NET :-(

/David

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Yaron Minsky
> Sent: Thursday, February 03, 2005 3:28 PM
> To: S. Alexander Jacobson
> Cc: haskell@haskell.org
> Subject: Re: [Haskell] Job Posting (Looking for a few good 
> functionalprogrammers)
> 
> S. Alexander Jacobson wrote:
> 
> > Yaron, would you mind sharing the reason your firm chose OCaml over 
> > Haskell for your applications?
> 
> I started the quantitative research group, and I knew OCaml 
> very well, and didn't know Haskell except by reputation.  As 
> to the merits, it is my general impression that OCaml is 
> faster, and is all around a more pragmatic language than 
> Haskell.  That's merely an ill-informed impression, but there it is.
> 
> Yaron
> 
> > For others, I would love to organize an informal gathering of NYC 
> > Haskell programmers if there are any.  If you are 
> interested, please 
> > contact me and I'll try to make it happen.
> >
> > -Alex-
> > __
> > S. Alexander Jacobson tel:917-770-6565 http://alexjacobson.com
> >
> >
> > On Thu, 3 Feb 2005, Yaron Minsky wrote:
> >
> >> Jane Street Capital (an affiliate of Henry Capital
> >> ) is a proprietary trading 
> company located 
> >> in Manhattan. The quantitative research department is 
> responsible for 
> >> analyzing, improving, and generating trading strategies.  It's an 
> >> open and informal environment (you can wear shorts and a 
> t-shirt to 
> >> the office), and the work is technically challenging, including 
> >> systems work, machine learning, statistical analysis, parallel 
> >> processing, and anything that crosses our path that looks useful.
> >>
> >> One unusual attraction of the job is that the large 
> majority of our 
> >> programming is done in OCaml.  Pay is competitive, and we're a 
> >> reasonably small company (around 85 employees), so advancement is 
> >> pretty quick for someone who performs well.
> >>
> >> Here's what we're looking for:
> >>
> >> - Top-notch mathematical and analytic skills.  We want people who  
> >> can solve difficult technical problems, and think clearly and  
> >> mathematically about all sorts of problems.
> >>
> >> - Strong programming skills.  Pretty much all of our 
> programming is  
> >> in OCaml, so being a solid caml hacker is a big plus.  But we're  
> >> also interested in great programmers who we are convinced will be  
> >> able to pick up OCaml quickly, so anyone with a high-level of  
> >> proficiency with functional languages could be a good match.
> >>
> >> - Strong Unix/Linux skills --- We're looking for someone 
> who knows  
> >> their way around the standard unix tools, can write 
> makefiles,  shell 
> >> scripts, etc.  We use a beowulf cluster for 
> compute-intensive  jobs, 
> >> so experience programming for and administering clusters is a  big 
> >> plus.
> >>
> >> If you're interested (or have any students you think might 
> be a good
> >> match) and would be willing to relocate to New York, please send a 
> >> cover-letter and resume to:
> >>
> >> [EMAIL PROTECTED]
> >>
> >>
> >>
> >
> >
> >
> 
> 
> 



RE: [Haskell] Per-type function namespaces (was: Data.Set wishes)

2004-02-27 Thread David Bergman
Andre "Ozone" wrote:

> On 27/02/2004, at 9:51 AM, David Bergman wrote:
> 
> >> So at the moment, many Haskellers will append the type name to the 
> >> function to indicate that it only works on that particular 
> data type.
> >> In this respect, Haskell is at a disadvantage vs most 
> object-oriented 
> >> languages, because in them, you can write "x.add", and the type 
> >> system will perform "object-oriented polymorphism" for you 
> and call 
> >> the correct add method, no matter if x is a FiniteMap or a Set.  
> >> Writing "addToFM fm ..." or "addToSet set ..." is surely a 
> lot more 
> >> inconvenient than writing "fm.add" or "set.add", no?
> >
> > Yes. But, you are refering to overloading, no? And, not subtype 
> > polymorphism (which is what I denote with "object-oriented 
> > polymorphism")? Just to make things clear in my mind.
> 
> Yes, what I'm referring to is essentially overloading.  I 
> called it "object-oriented polymorphism" because that's 
> typically what OO people call such a thing :).

No, "they" do not. What they call polymorphism, we call subype polymorphism
or, if we are really hard-core and/or old school, even ad-hoc polymorphism.
"They" do not even realize that overloading falls in the category of
polymorphism at all...

> (I should 
> know better to use OO terminology on a Haskell list; won't 
> happen again ...).

I think it is good that you do. We need that touch of engineering realism
sometimes ;-)

> However, it's form of overloading that 
> Haskell cannot currently handle  well with type classes -- 
> Oleg's post proves that you can do it, of course, but that's 
> a (very good) hack rather than a long-term solution.

I personally use (Haskell) classes for that overloading purpose, but in the
sense of what Generic Programmers would call concept modelling.
 
> > So, you have thought of automatically, but implicitly, introduce a 
> > namespace for each data type, and then have Haskell employ Koenig 
> > Lookup, to decide which function an expression is refering to?
> 
> It's a bit like Koenig lookup in that it has the same effect, 
> although it's probably easier for the compiler to infer the 
> namespace wanted, since we write "expr.function ..." rather 
> than "function expression ...".

Whether it is prefix or postfix should not alter the complexity of the
lookup considerably.

> Writing "function expression 
> ..." would work too, but then it looks like a standard 
> function call rather than a function call associated with a 
> particular type, and I think that causes more confusion.  
> Long-time Haskell users understand that writing "foo.f" 
> means "use f in namespace foo"; changing around the language 
> semantics to mean that "f foo" now means "use f in namespace 
> foo" would make lots of people rather unhappy :).

I would want it to look like an ordinary function. The single biggest problem
with Haskell, in my extremely humble opinion, is the shared namespace for
all data type accessors, with which you probably agree. It is what irritated
me the most with Entity-Relationship Diagrams: that all fields need to have
globally unique names. This is in contrast to instance variables, methods, and
general overloading, as often found in OO languages.
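As a small illustration (the type and field names below are made up, not from the thread): in Haskell 98 every record field becomes a top-level accessor function, so two types in one module cannot share a field name, and we fall back on exactly the type-prefixed naming discussed above.

```haskell
-- `name` cannot be a field of both types in one module, since each
-- field declaration defines a top-level accessor function. Hence the
-- type-prefixed names, analogous to addToFM vs. addToSet.
data Person  = Person  { personName  :: String }
data Company = Company { companyName :: String }

main :: IO ()
main = putStrLn (personName (Person "Ada"))
```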
 
> > You realize, of course, that "mere" intranamespacial parameter type 
> > lookup (regular overloading) would achieve the same effect, without 
> > the
> > (implicit)
> > namespaces?
> 
> I'm not sure what you mean by "intranamespcial parameter type lookup" 
> -- can you explain?

Ah, I meant regular overloading, i.e., having a function be identified not by
its name but by its whole signature, including the arity and parameter
type(s) [yes, curry, curry...].

> >> There are a number of means by which the x in x.add can be 
> >> communicated to the actual function: it's similar to the 
> hidden 'self'
> >> or 'this'
> >> variable that's present when you invoke a method on an 
> object in OO.
> >> Perhaps x is passed to the function as its first 
> parameter, or maybe 
> >> it could be its last parameter, or even an arbitrary 
> parameter (where 
> >> the parameter it's passed as could be defined in the type 
> signature 
> >> of the function).  Perhaps 'self' or 'this' could be an implicit 
> >> parameter.
> >> Any one of them will work just fine, I think.
> >
> > Again, I think you are

RE: [Haskell] Per-type function namespaces (was: Data.Set wishes)

2004-02-26 Thread David Bergman
Mr. Ozone wrote: 

[snip]
> So at the moment, many Haskellers will append the type name to the 
> function to indicate that it only works on that particular data type.
> In this respect, Haskell is at a disadvantage vs most object-oriented 
> languages, because in them, you can write "x.add", and the type system 
> will perform "object-oriented polymorphism" for you and call the 
> correct add method, no matter if x is a FiniteMap or a Set.  Writing 
> "addToFM fm ..." or "addToSet set ..." is surely a lot more 
> inconvenient than writing "fm.add" or "set.add", no?

Yes. But you are referring to overloading, no? And not subtype polymorphism
(which is what I denote by "object-oriented polymorphism")? Just to make
things clear in my mind.

> The idea that I've been throwing around is to be able to define a 
> separate namespace for each type; a function can either belong in a 
> "global" (default) namespace, or belong in a particular type's 
> namespace.  So, in the above example, instead of writing "addToFM fm 
> ...", we could instead associate an 'add' function with the FiniteMap 
> type, so we could write "fm.add ..." instead.  Provided that fm's type 
> is monomorphic, it should be possible to call the 'correct' add 
> function; if we defined another 'add' function that's associated with 
> the Set type, that will only get called if the 'x' in "x.add" is of 
> type :: Set.  So, like OO languages which inherently give separate 
> namespaces to their different objects, here we give separate 
> namespaces to different
> (monomorphic) types.  In this case, if one simply writes "add" instead 
> of "x.add", the compiler throws an error, because there is no 'add' 
> function defined in the default namespace; add is only defined when a 
> programmer writes "x.add" where x :: FiniteMap or x ::
> Set[1].

This overloading by namespace is usually called either ADL
(Argument-Dependent Lookup) or Koenig Lookup (especially in C++).

So, you have thought of automatically, but implicitly, introducing a namespace
for each data type, and then having Haskell employ Koenig Lookup to decide
which function an expression is referring to?

You realize, of course, that "mere" intranamespacial parameter type lookup
(regular overloading) would achieve the same effect, without the (implicit)
namespaces?

The core problem in Haskell is to bypass the generics, i.e., to make sure that
a certain definition is used for a certain type, or combination of types.
As of now, this can only be done with class instances, but there have been
discussions of non-class overloading.
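For concreteness, here is a minimal sketch of the class-instance route mentioned above (the `Container` class and `add` method are illustrative names, not an existing library): one overloaded name, resolved per type by the instance.

```haskell
-- One overloaded `add`, dispatched by the container type via instances.
class Container c where
  add :: a -> c a -> c a

instance Container [] where
  add = (:)              -- the "add" for lists

instance Container Maybe where
  add x _ = Just x       -- a different "add" for Maybe

main :: IO ()
main = print (add (1 :: Int) [2, 3])
```

The instance chosen is determined entirely by the type of the second argument, which is exactly the "certain definition for a certain type" behaviour the paragraph describes.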

> There are a number of means by which the x in x.add can be 
> communicated to the actual function: it's similar to the hidden 'self' 
> or 'this'
> variable that's present when you invoke a method on an object in OO.  
> Perhaps x is passed to the function as its first parameter, or maybe 
> it could be its last parameter, or even an arbitrary parameter (where 
> the parameter it's passed as could be defined in the type signature of 
> the function).  Perhaps 'self' or 'this' could be an implicit 
> parameter.
> Any one of them will work just fine, I think.

Again, I think you are confusing runtime-dispatched subtype polymorphism
with overloading. Overloading would do what you want, while the subtype
polymorphism could (still) be handled by classes, and instances of classes,
the Generic Programming way.
 
/David



RE: [Haskell] Per-type function namespaces (was: Data.Set wishes)

2004-02-26 Thread David Bergman
Gabriel wrote: 

> | This overloading by namespace is usually called either ADL 
> | (Argument-Dependent Lookup) or Koenig Lookup (especially in C++.)
> 
> Actually in C++, it is called "argument dependent name 
> lookup", and that is the way the C++ definition text calls 
> it. As Andy Koenig has himself pointed out, he did not invent 
> that rule.  He mentionned it when the C++ committee was 
> solving a name look-up problem posed by namespaces to 
> operator functions.  That name look-up rule was later 
> generalized to non-operator to cover the function-call syntax 
> -- which is what is most known today and referred to above. 
> 
> This ends my C++ hour on Haskell list :-)

Yeah! Get back to that dark corner where people solve real problems! ;-)

/David



RE: [Haskell] Re: Implicit return values

2004-01-25 Thread David Bergman
Ben wrote:

> I wrote:

Ben, it seems that you are having a quite fruitful discussion with yourself
;-)

I will just wait here for a more conclusive form of your
backward-propagating linear parameter.

/David




RE: [Haskell] Re: Use of tab characters in indentation-sensitive code

2004-01-25 Thread David Bergman
Sean wrote: 

> Joking aside, surely you intelligent people realize that the internals 
> of a file format have nothing whatsoever to do with the user interface 
> of the editing tool.  Something like this would be completely 
> transparent *if* you used the right tools.

But then you would be forced to use exactly those tools and/or that
development platform. I have programmed in languages that extend beyond
ASCII (most modern languages actually do, but that is another story). One of
these languages is APL or, more specifically, A+.

Not a nice experience, and that is "just" an extension w.r.t. the character
table used. Once you leave the sheltered environment of a properly set-up
Emacs with the proper fonts installed, it all looks like random junk.
 
> This just shows how deeply ingrained the ascii plain text mindset is 
> in the programming community.  I don't expect anything like this to 
> ever fly, for this reason.  You guys won't let it.  :(

We guys do try, and some of us have used non-alphanumeric symbols, but they
do not add much unless one leaves the linear realm of text completely and
enters the world of diagrammatic notations.
 
> Besides, the idea would be not to use  , but rather some "indent 
> paragraph" tag.

That would hardly make it more tractable outside the sheltered "Tag Editor,"
would it?

foo bar = case bar of
    Zot x -> ...
    Pleb -> ...

/David



RE: [Haskell] Re: Use of tab characters in indentation-sensitive code

2004-01-25 Thread David Bergman
Sebastian wrote:

> Sean L. Palmer wrote:
> 
> > Besides, the idea would be not to use  , but rather
> some "indent
> > paragraph" tag.
> 
> This is kind-of a cool idea. If I ever take a course involving writing 
> my own language I'll be sure to incorporate this idea.

This idea of an indent paragraph tag has been incorporated in various
development environments and, partly, in languages. It is called the tab
character. Environments such as Emacs can be trained to treat those tab
characters as an indentation tag. And even in word processing this tagging
character has been used quite extensively.

:-|

/David



RE: set representation question

2003-11-13 Thread David Bergman
Stefan wrote: 

[snip]
> > Isn't it O(min(m,n))?  You don't have to look at all 
> elements for the 
> > intersection.  Eg:
> > 
> >   {0,1,10}
> >   {0,3,98,183,398,1038,5319,7642,9811,13893,93123}
> 
> O(f) describes the worst case of the algorithm. It is O((m,n)->m+n).
> The average cost may be lower, but it depends on the 
> distribution of the data.

Yes, and maybe these input lists make it a bit clearer:


A = {0, 5, 10}
B = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}

Kind of hard to just "jump over" the B-exclusive elements...
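A sketch of the merge-based intersection under discussion (assuming strictly ascending input lists; `isect` is an illustrative name, not a library function):

```haskell
-- Intersect two strictly ascending lists by merging. Worst case is
-- O(m+n): with A = [0,5,10] and B = [0..10], every element of B is
-- examined even though only three of them survive.
isect :: Ord a => [a] -> [a] -> [a]
isect [] _ = []
isect _ [] = []
isect xs@(x:xs') ys@(y:ys')
  | x < y     = isect xs' ys
  | x > y     = isect xs  ys'
  | otherwise = x : isect xs' ys'

main :: IO ()
main = print (isect [0, 5, 10] [0 .. 10])
```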

/David



[haskell] Metacatamorphism library

2003-10-17 Thread David Bergman
I have not seen any Haskell metacatamorphism library for banana-split
constructions.

Would it not be a good start for a Template Haskell-specific library? In
fact, is there a plan to create a Template Library for Template Haskell,
somewhat homologous to the STL in C++?
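For readers unfamiliar with the term, the "banana split" law fuses two catamorphisms over the same structure into a single fold that builds a pair. A minimal list-only sketch (the function names here are illustrative):

```haskell
-- Banana split for lists: instead of folding twice, fold once,
-- threading both accumulators as a pair.
banana :: (a -> b -> b) -> b -> (a -> c -> c) -> c -> [a] -> (b, c)
banana f e g e' = foldr (\x (b, c) -> (f x b, g x c)) (e, e')

-- Sum and length of a list in a single traversal.
sumAndLength :: [Int] -> (Int, Int)
sumAndLength = banana (+) 0 (\_ n -> n + 1) 0

main :: IO ()
main = print (sumAndLength [1, 2, 3])
```

A metacatamorphism library would generate such fused folds for arbitrary data types, which is why Template Haskell is a natural fit.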

/David



RE: Announce: buddha 1.0 released

2003-10-17 Thread David Bergman
 
Bernie "Buddha" wrote:

> Announcing buddha version 1.0
> -
> 
>www.cs.mu.oz.au/~bjpop/buddha
> 
> A declarative debugger for Haskell 98. It is based on program 
> transformation and works with GHC 5 and 6.
> 
> buddha offers a declarative debugging algorithm and a 
> browsing mechanism.
> It is useful for finding logical errors in programs and for 
> exploring computations in a high-level manner.
> 
> The interface is designed to be simple to use, and may be an 
> ideal aid for teaching Haskell.
> 
> Version 1.0 is known to work on unix platforms, including OS 
> X, GNU/linux and freeBSD. It does not work on Windows.
> 
> buddha is released under the GPL license.

Seems like a very nice tool, although I wonder why one needs buddha for the
sample Tree code treated in the User's Guide. Would it not be easier to just
look for the string "the bug is here" and correct the problem?
/David



RE: New Haskell and FP merchandise logos for review

2003-07-31 Thread David Bergman
I seldom post things here, but Fritz, you are a genius!

/David




RE: Running out of memory in a simple monad

2002-12-16 Thread David Bergman
You are right,

After writing that e-mail I looked at a lot of cases in Hugs, and also
encountered this CAF problem. And, as I pointed out elsewhere, the "last
call optimisation" is not very interesting in the lazy evaluation
scenario...

One problem, though, is that I would prefer not to get rid of the CAF,
since I (presumably wrongly) assume that CAFs are implemented more
efficiently in Hugs than "normal" definitions. Am I right in this
assumption?

Thanks,

David

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED]] On Behalf Of Alastair Reid
> Sent: Monday, December 16, 2002 9:18 AM
> To: David Bergman
> Cc: 'Richard Uhtenwoldt'; [EMAIL PROTECTED]
> Subject: Re: Running out of memory in a simple monad
> 
> 
> 
> "David Bergman" <[EMAIL PROTECTED]> writes:
> > Note: In an unoptimized scenario, such as
> > with Hugs, you do indeed run out of memory in your "loop" 
> (after some 
> > 4 iterations) not having the recursion in the last call. Even 
> > loops not constructing cons cells do, such as
> 
> > loop 0 = return ()
> > loop n = do { print n; loop (n-1) }  -- print n >> loop (n-1)
> > main = loop 5
> 
> > (you see Hugs die after some 25000 iterations...)
> 
> I'm afraid your diagnosis is way off base here.
> 
> The problem is nothing to do with a 'last call optimization' 
> or with the do syntactic sugar and can be worked around not 
> by changing how you _write_your code (which would obviously 
> be a large burden) but by how you _call_ your code (a much 
> smaller burden).
> 
> The problem is to do with garbage collection of Constant 
> Applicable Forms (or CAFs).  CAFs are definitions like:
> 
>   nil = []
>   one = 1 :: Int
>   primes = ...
>   main = loop 5
> 
> GHC and Hugs differ in how they treat CAFs.  Hugs treats CAFs 
> as a special case and never garbage collects a CAF - the only 
> way to discard its value is to unload the module that defines 
> it.  GHC treats CAFs the same as normal definitions and 
> garbage collects them when they can no longer contribute to 
> future execution.
> 
> The difference between these behaviours can be seen when the 
> CAF grows very large as in your example.
> 
> The workaround is simple enough: add a dummy argument to the 
> CAF (so that it is not a CAF any more):
> 
>main _ = loop 5
> 
> and then specify the extra argument when invoking it:
> 
>main ()
> 
> (This is a pretty standard optimisation technique: we're 
> trading time to recompute a result for the space taken to 
> store the result.  Coming from other languages where actions 
> (i.e., monadic computations) are not first class values, this 
> is a bit surprising but, from a Haskell perspective, it is 
> completely uniform.)
> 
> 
> Note that this workaround is necessary with GHC too if you 
> have a large CAF which does not die.  For example, if you 
> wanted to benchmark your code, you might want to run it 10 
> times using:
> 
> > main1 = loop 5
> > main = sequence_ (replicate 10 main1)
> 
> Now main1 will not be garbage collected until the last time 
> it is executed.  The solution is the same as for Hugs: add a 
> dummy argument.
> 
> --
> Alastair Reid [EMAIL PROTECTED]  
> Reid Consulting (UK) Limited  
> http://www.reid-consulting-uk.ltd.uk/alastair/
> 
> 




RE: Constant space infinite iteration ... solution?

2002-12-13 Thread David Bergman
Hi, Bruce.

Just want to clarify the "last call optimisation"/"tail recursion"
terminology.

One does not "remove" a tail-recursive call; one just makes it O(1)
w.r.t. the stack (or any other "where have we been" call frame structure...)
with the help of "tail recursion optimisation". "Last call optimisation"
is a generalization of the latter, applicable to any call, recursive or not.

In effect, implementers seldom care whether the last call is recursive
or not; they just skip the "return address" push, making "last call
optimisation" operationally equivalent to "tail recursion optimisation".

Richard was (and still is) completely correct in that last call
optimisation is not a crucial element of lazy evaluation. That is not
necessarily true, though, when strictness analysis or explicit strictness
annotations (!) allow the implementation to skip the general thunk
creation/inspection/destruction loop and instead use the much more
time-efficient eager evaluation (i.e., mapping to the C code you
referred to...), which can be memory-inefficient, similar to your Hugs
experience.
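As a small illustration of the strictness point (a sketch, not code from this thread): forcing the accumulator lets a tail-recursive loop run in constant space even under lazy evaluation.

```haskell
-- Tail-recursive counter. The `seq` forces the accumulator at each
-- step; without it, lazy evaluation would build a chain of (+1)
-- thunks even though the call itself is in tail position.
count :: Int -> Int -> Int
count acc 0 = acc
count acc n = acc' `seq` count acc' (n - 1)
  where acc' = acc + 1

main :: IO ()
main = print (count 0 1000000)
```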

Good luck with that Ferrari (that is the only Ferrari we Haskellers will
ever see...)

/David

Bruce wrote:
> 
> 
> Hi all,
> 
> 
> Ok, I've got the Ferrari out of the garage, into gear, and even driven
> it slowly around the block, but every time I put my foot down it just
> stalls.
> stalls.
> 
> I'm trying to write a non trivial gui in Haskell. At the
> moment I'm using Hugs, and rapidly coming to the conclusion 
> that I should be using something else such as GHC.
> 
> As I see it the problem is basically that of tail recursion
> removal or as David Bergman calls it "last call optimisation".
> 
> Specifically:
> 
> David Bergman wrote
> 
> > It is easy to get the feeling that the recursive call in
> > 
> > recurse = do { f x ; recurse }
> > 
> > is the last one, but this is just an illusion of the iterative 
> > layout of "do". Sometimes the monads lure us into old iterative 
> > thought patterns...
> >
> > Taking away the syntactic sugar, we end up with
> > 
> > recurse = f x >> recurse
> > 
> > This reveals the fact that there can be no last call optimization, 
> > because the last call is ">>".
> 
> In response Richard Uhtenwoldt echoed my own thoughts ...
> 
> > What do you mean by this?  Do you mean that that an implementation 
> > cannot execute arbitrarily many iterations/recursions of that last 
> > loop/definition in constant space?
> 
> And also said ...
> 
> > If that's what you mean, you are wrong.  GHC does that.
> > The IO monad would be pretty damned useless for actual
> > work if implementations did not do that!
> 
> So, my problem is that I find that my program crashes with
> "garbage collector can't collect enough memory" after about 
> 64 Million output operations.
> 
> What's really annoying is that I can write the whole thing
> *recursively* in C in a very "functional" manner, and it works just 
> fine.
> 
> As I see it the problem is that >> and >>= are functions
> and David is right about the "not the last call" problem.
> But, Richard is right that related optimisations should 
> be possible in Haskell, and if not then you can kiss the
> whole load good by and go back to system programming in C.
> 
> In particular I would claim that the definition:
> 
>  recurse = f x >>= recurse
> 
> is essentially using >>= as a proxy temporal execution handler. So in 
> the above definition the call to recurse
> *is* a "last-call", at least according to >>=.  I'm
> probably saying that in a rather unorthodox manner,
> but I hope my basic intention is clear.
> 
> In fact we can see >> in Haskell as rather kin to ; in C
> (Or perhaps it should be >>=, since ; does transfer the
> state to the subsequent computation).
> 
> So, I've loaded GHC and I'm looking to use it instead
> (I expected to eventually anyway), but does this solve
> my problem? Or have I misunderstood something here?
> 
> Also ... I've been using the graphics libs with HUGS,
> but I can't find the equivalent in GHC ... what is
> the recomended library for writing GUIs in GHC Haskell?
> And where do I get it?
> 
> Thanks in advance ...
> 
> Bruce (IIMS, Massey at Albany).
> 
> ps: I've also been looking at Fudgets, but the code seems
> a bit cranky.
> 
> 
> 








RE: AW: slide: useful function?

2002-12-03 Thread David Bergman
Hi, Bill.

I agree 90% with you in questioning the adequacy of trying to incorporate
design patterns in Haskell, and their actual productive use in other
languages as well.

But, I must defend design patterns, and Haskell, a bit...

William Lee Irwin III wrote:
> On Mon, Dec 02, 2002 at 10:37:09AM +1100, Andrew J Bromage wrote:
> > In the interest of fairness, the declarative programming community 
> > occasionally appears to have an aversion to actual engineering.  If 
> > you mention a term like "design patterns", people look down their 
> > virtual noses at you like you're trying to infect their pure 
> > theoretical world with something unsanitary. My point is that nobody 
> > is immune from this kind of thinking, it just takes different forms.
> 
> I've got some issues here:
> 
> (1) Various things I'm doing as a practitioner lack theoretical
>   background (and/or content) for various problems in them.
>   e.g. how does one measure per-cpu time spent waiting on io on SMP?
>   e.g. page replacement mixes kinds of memory, creating list searches

What has this to do with design patterns? There is, as you know,
certainly no need for theoretical substance in order to either define or
apply design patterns. They are often quite vague and pragmatic.

Oh, I see, you switch between criticizing design patterns and
declarative languages... Ok, I will try to follow.

> (2) In theory and in practice I've seen "design patterns" and a couple
>   other "popular" programming movements add up to nothing. Design
>   patterns in particular produced precisely zero useful code or
>   code design during its entire lifetime as designs on the level on
>   which its principles operate are not ever getting freshly redone.

I have used design patterns mainly as (1) a common language in
communication with development teams and (2) as a library of thought
patterns (adornments, visitors, wrappers, etc.) for less experienced
developers (i.e., developers not having built up these patterns
"implicitly"). So, the use of design patterns is *not* mainly to create
code directly, but rather a common terminology within teams and
communities.

But, design patterns are clearly overestimated as a tool for (indirect)
code production, you are right in that.

> (3) Various tidbits of theoretically motivated languages assume so much
>   infrastructure as to be unsuitable for various (kernel)
>   environments. There's a circular dependency between what some
>   things implement and the infrastructure assumed, which is where
>   the problem lies.

What infrastructure are you referring to?

> (4) There is no excuse for willful ignorance. Linear searches in
>   interrupt context and other gross inefficiencies have brought
>   systems down because the events triggering the poor algorithms
>   occurred in realtime and are generated by hardware. They
>   consistently livelocked boxen with sufficient cpu counts.

Yes, linear search is certainly bad, not only in that particular
context, but in all... We can all agree on that one.

> At any rate, the gist of all this is that even though I don't 
> use much in the way of theory now, I would like to use more 
> of it, and there are various things I wouldn't mind having 
> that aren't universally available.
> 
> A much lower-level language (i.e. one requiring far less 
> infrastructure) with a decent type system and some modularity 
> would be nice for systems programming, which is the majority 
> of my work, but it really just doesn't make a difference 
> because the work is generally concentrated on a given 
> preexisting system, which isn't going to get converted 
> between languages.

There is one such language: C (if you can call the type system
decent...)
 
> And last, but not least, programming is a mathematical 
> discipline. Even the most dreary programming tasks are 
> phraseable as such.
> 
> (1) hardware does not obey spec, but driver is needed
>   -- augment the driver's state machine to handle new error cases
> (2) ridiculous code/patch merging constraints are enforced
>   -- analyze program structure to devise incremental merge strategy
> (3) make process interaction and/or scheduling decision
>   -- scheduling has lots of mathematical stuff behind it

It is also fruitful to separate "formal" and "mathematical" a bit, since
most people tend to diverge into differential geometries, categories,
Hilbert spaces, etc. when "mathematics" is mentioned, but stay still when
"formal" arguments are presented. I.e., your example scenarios above
would probably be best classified as formal. Yes, I know that everything
formal is mathematical when scrutinized...

> None of this is to say that I haven't made extensive and 
> highly beneficial use of Haskell in userspace, which I have. 
> It is a full-blooded substitute for perl, python, and 
> several other "scripting" languages that supposedly ride on 

Al

RE: AW: slide: useful function?

2002-12-02 Thread David Bergman


John Hughes wrote:
> 
> > On Mon, 2 Dec 2002, Andrew J Bromage wrote:
> >
> > > ... If you mention a term like "design patterns",
> >
> > well I love design patterns, it's just that in Haskell-land they are
> > called higher-order functions, or polymorphic functions, etc.
> >
> > -- Johannes Waldmann  http://www.informatik.uni-leipzig.de/~joe/ --
> 
> Or maybe 
> more general notions, such as
> 
>    "define a recursive datatype together with a catamorphism combinator"
> 
> or even
> 
>"embed a domain-specific language via combinators over an ADT"
> 
> There are patterns of that sort in our programs, which we 
> would probably rather call design techniques, which aren't so 
> easily captured by a higher-order function definition.

It seems like all the patterns, at least the ones in the GoF
enumeration, can be expressed as higher-order functions and classes, if
only we had a way to traverse a record structure dynamically. If
someone can think of a "design pattern" (such as one from the GoF book)
that is not directly expressible in Haskell, up to data type structure
(i.e., assuming the structure to be a list or a fixed tree structure...),
please let us know.
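As an illustrative sketch (mine, not from the original message, with
made-up names), here is one GoF pattern, Visitor, expressed directly as
a catamorphism over a recursive datatype:

```haskell
-- Hypothetical example: the GoF Visitor pattern as a fold (catamorphism).
data Expr = Lit Int | Add Expr Expr | Neg Expr

-- The fold takes one function per constructor; that record of
-- functions plays the role of the "visitor".
foldExpr :: (Int -> r) -> (r -> r -> r) -> (r -> r) -> Expr -> r
foldExpr lit add neg = go
  where
    go (Lit n)   = lit n
    go (Add x y) = add (go x) (go y)
    go (Neg x)   = neg (go x)

-- Two "visitors": an evaluator and a pretty-printer.
eval :: Expr -> Int
eval = foldExpr id (+) negate

render :: Expr -> String
render = foldExpr show (\a b -> "(" ++ a ++ " + " ++ b ++ ")") ("-" ++)

main :: IO ()
main = do
  let e = Add (Lit 1) (Neg (Lit 2))
  print (eval e)      -- -1
  putStrLn (render e) -- (1 + -2)
```

Adding a new "visitor" is just another call to foldExpr, with no
boilerplate class hierarchy.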

Till then, we "Haskellers" will probably continue expressing our
patterns either directly in Haskell or using highly formal language,
with terms such as "catamorphisms".

The virtue, and weakness, of traditional design patterns is their
vagueness and informal character, making them (1) comprehensible to the
90% of the developer community not familiar with category theory but (2)
constituting only vague schemes for implementation; in fact, they are
often so vague that it takes quite some effort to determine whether a
fragment of code follows a pattern. Actually, showing a fragment of code
to a group of software engineers and having them pick the pattern
embodied will probably lead to differing opinions.

In Haskell-land, or in any other land on the Functional continent, a
catamorphism is a catamorphism: as soon as an engineer recognizes the
algorithm of a code fragment, he will immediately determine the
catamorphic (or other -morphic) character. No fuss, no differing
opinions. Ah, this is my land...

/David

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Carried away in the monads? was: RE: Running out of memory in a simple monad

2002-12-01 Thread David Bergman
Richard wrote:

> some are using Haskell for "systems programming", as a better C
> than C.  some, including me, would like to see more of that,
> with Haskell or another pure functional language with an IO monad
> taking systems programmers away from the C and C++ communities.

That is good, I would probably call myself a C++ developer primarily,
but Haskell + the IO monad is a much better choice (even better than
Perl...) in most situations.

> Hugs is completely useless for *that*.

Yes, it is, although it is great for testing the validity of one's
Haskell programs.

> for an example of Haskell as a better C than C, see Chak's Gtk+
> bindings.  to use them you must write your whole GUI in the IO monad
> in a style where the basic data structures and control structures
> closely resemble what you would write in C.

I have seen it, and it is a bit imperative in its style.

> many Haskellers have a negative opinion of such heavy use of the
> IO monad, but in systems programming you need more control over
> when (relative to other interactions) your program performs an
> interaction with a file, network or UI resource than is available
> in Haskell without the IO monad.

I must confess that I belong to that "theoretical" group, although I am
a systems programmer (i.e., developing real systems for real customers,
paying real money).

In my opinion, the use of categorical monads in a programming language
is a brilliant intellectual achievement, combining the "pure" and "real"
in programming, at least on paper... This is where the Haskellers
divide: one group (me included) considers the hidden state in monads as
falling back to imperative thought patterns. In some cases, as in IO, we
need to have a state, obviously, but it would be beneficial if we could
divide the logic into state independent (defined outside the monads) and
state dependent code (defined within the monads; such as IO).
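A minimal sketch of that division (illustrative names, not from the
thread): the state-independent logic is an ordinary pure function, and
only a thin shell lives in the IO monad:

```haskell
-- State-independent: pure logic, defined outside any monad.
report :: [Int] -> String
report xs = unlines [show x | x <- xs, x > 0]

-- State-dependent: the thin IO shell that performs the interaction.
main :: IO ()
main = putStr (report [3, -1, 4])
```

The pure part can be tested and reused without ever touching IO; only
main needs sequencing.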

It is a bit like some "C++" developers I meet, who consider themselves
to develop in C++ just because they use "const" or "&" (reference):
functional programming is about choice of language and choice of
programming style.

/David

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



RE: Running out of memory in a simple monad

2002-12-01 Thread David Bergman
So,

Should I infer that the IO monad is "pretty damned useless" in Hugs
then, since the loop does not run in constant space there?

There are a lot of algorithms that cannot be run in constant space (due
to either recursion depth or structure generation), even in the most
optimized setting. This does not make them useless.

/David

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
Behalf Of Richard Uhtenwoldt
Sent: Saturday, November 30, 2002 11:53 PM
To: David Bergman
Cc: [EMAIL PROTECTED]
Subject: RE: Running out of memory in a simple monad

David Bergman writes:

>It is easy to get the feeling that the recursive call in
>
>   recurse = do
>   f x
>   recurse
>
>is the last one, but this is just an illusion of the iterative layout
of
>"do". Sometimes the monads lure us into old iterative thought
>patterns...
>
>Taking away the syntactic sugar, we end up with
>
>   recurse = f x >> recurse
>
>This reveals the fact that there can be no last call optimization,
>because the last call is ">>".

What do you mean by this?  Do you mean that an implementation
cannot execute arbitrarily many iterations/recursions of that last
loop/definition in constant space?

If that's what you mean, you are wrong.  GHC does that.  The IO monad
would be pretty damned useless for actual work if implementations did
not do that!

Even if we replace the (>>) with a (:) as the "main operator"/"last
call" of the loop, the result can execute in constant space because
of optimizations.

E.g., the program

  loop n=n:loop (n+1)
  main = print (loop 0)

executes in constant space when compiled by GHC 5.02.


Details: 

Specifically, I let it run till it printed 2,500,000 at which time top
reported the "RSS" to be 1,500 with no increase having occurred in a
long time.  top's manpage says that "RSS" is "the total amount of
physical memory used by the task, in kilobytes".

The statement about (>>) is true when the (>>) is in the IO monad.  I
did not check to see what happens in a "user-defined" monad.
___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell




RE: Running out of memory in a simple monad

2002-11-30 Thread David Bergman
Richard,

I assumed that the compiler had, through strictness analysis, used a
call stack in the compiled code, instead of the usual call frames in the
heap (equivalent to "hey, I thought this was the Standard ML mail
group?") ;-)

Actually, you are right in that the "last call optimization" is not
vital in most call-by-need scenarios, since both (the implicit) call
stack and data structures are often consumed as generated, and the GC
can reclaim the thunks. Note: In an unoptimized scenario, such as with
Hugs, you do indeed run out of memory in your "loop" (after some 4
iterations) when the recursion is not in the last call. Even loops not
constructing cons cells do, such as

loop 0 = return ()
loop n = do { print n; loop (n-1) } -- print n >> loop (n-1)
main = loop 5

(you see Hugs die after some 25000 iterations...)

Sorry about the over-simplification, but I wanted people to not forget
that the "do" is just syntactic sugar...

Thanks,

David

-Original Message-
From: Richard Uhtenwoldt
Sent: Saturday, November 30, 2002 11:53 PM
To: David Bergman
Cc: [EMAIL PROTECTED]
Subject: RE: Running out of memory in a simple monad

David Bergman writes:

>It is easy to get the feeling that the recursive call in
>
>   recurse = do
>   f x
>   recurse
>
>is the last one, but this is just an illusion of the iterative layout
of
>"do". Sometimes the monads lure us into old iterative thought
>patterns...
>
>Taking away the syntactic sugar, we end up with
>
>   recurse = f x >> recurse
>
>This reveals the fact that there can be no last call optimization,
>because the last call is ">>".

What do you mean by this?  Do you mean that an implementation
cannot execute arbitrarily many iterations/recursions of that last
loop/definition in constant space?

If that's what you mean, you are wrong.  GHC does that.  The IO monad
would be pretty damned useless for actual work if implementations did
not do that!

Even if we replace the (>>) with a (:) as the "main operator"/"last
call" of the loop, the result can execute in constant space because
of optimizations.

E.g., the program

  loop n=n:loop (n+1)
  main = print (loop 0)

executes in constant space when compiled by GHC 5.02.


Details: 

Specifically, I let it run till it printed 2,500,000 at which time top
reported the "RSS" to be 1,500 with no increase having occurred in a
long time.  top's manpage says that "RSS" is "the total amount of
physical memory used by the task, in kilobytes".

The statement about (>>) is true when the (>>) is in the IO monad.  I
did not check to see what happens in a "user-defined" monad.
___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell




RE: Running out of memory in a simple monad

2002-11-29 Thread David Bergman
Just to add to what Zdenek wrote:

The linear complexity of string concatenation in a naïve implementation
(not having access to an extra-language "end-of-list" in the "diff list"
sense...) makes the total complexity O(n^2), since the number of conses
generated is thus

sum [1 .. n]

which, obviously, is (1+n)*n/2. In the case of n = 50000, as in the
example, we get "sum [1 .. 50000]" => 1250025000. So, well over one
billion conses. This is why "++" is right-associative :-)
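The associativity point can be sketched as follows (my illustration,
with made-up names): left-nested (++) re-copies the accumulated prefix
at every step, while right-nested (++) does not:

```haskell
import Data.List (foldl')

-- Left-nesting (++) gives the quadratic cons count described above;
-- right-nesting is linear in the total length of the result.
leftNested, rightNested :: [[a]] -> [a]
leftNested  = foldl' (++) [] -- O(n^2) conses for n unit-length chunks
rightNested = foldr  (++) [] -- O(n) conses

main :: IO ()
main = print ( length (leftNested  (replicate 1000 "x"))
             , length (rightNested (replicate 1000 "x")) )
```

Both produce the same list; only the amount of intermediate copying
differs.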

This time complexity cannot make Hugs crash, though, except for a
defective GC having problems tidying up after each round of "++".

The space complexity, which reduces to maximum execution stack space
(considering a proper GC) in the example, is what kills Hugs. The
problem is that the string concatenation is not the last call, so there
is no room for last call optimization.

If you want to mimic the complexity of the example while calculating the
number of conses required, try evaluating the "isomorphic" expression

last $ scanl1 (+) $ take 50000 (repeat 1)

It might crash in Hugs, running out of execution stack space, for the
same reason as the original example. It is the "last" that holds up the
stack space, by the way.

I hope this was helpful.

Regarding "do":

It is easy to get the feeling that the recursive call in

recurse = do
f x
recurse

is the last one, but this is just an illusion of the iterative layout of
"do". Sometimes the monads lure us into old iterative thought
patterns...

Taking away the syntactic sugar, we end up with

recurse = f x >> recurse

This reveals the fact that there can be no last call optimization,
because the last call is ">>".

Regards,

David

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



RE: question about haskell report terminology

2002-11-28 Thread David Bergman
Bernard James wrote:

> In section 4.4.3 "Function and Pattern Bindings" of the Haskell 98
> Report, it gives the following translation:

[the pattern-lambda to case-expression conversion from the Report]

> What does it mean by "semantically equivalent"? A rough approximation
> is "has the same meaning", but that depends on how you define the
> "meaning".

In the context of code execution, it all boils down to one unique
meaning: the result of the execution. I.e., two code fragments
effecting the same environmental change (informally, the "identifier ->
variable -> value" mapping plus I/O) are considered semantically
equivalent. This is called "operational semantics".

One of the (many) advantages of Haskell, being a purely declarative
language, is that one can use a more tractable approach: constructs are
operationally equivalent if and only if they are substitutable (after
proper renaming of variables to avoid incidental name clashes); one can
even use a simple calculus in the kernel language (after the compilation
you mentioned).

> For example:
>
>   foo x = show x
>  versus
>   foo = \x -> show x

And, why not the simplest version: "foo = show"...

If we call these three versions "foo1", "foo2" and "foo3", then they are
semantically equivalent because, besides having the same type, one can
substitute one with anyone of the others; if we have the definition

g = foo1 "Hello"

we can replace "foo1" with either "foo2" or "foo3" and get the exact
same behavior (or operational semantics, if you prefer):

g = foo2 "Hello"

> I think the answer to the question is something like: this rule is
> intended to translate Haskell into the kernel, but it is not an
> equivalence that the programmer may (always) use in their own program
> (ie it is applicable only after type checking).

Semantic equivalence does not necessitate syntactic equivalence
(although the opposite implication trivially holds...), if that was your
worry. And, no, programmers need not worry about this equivalence in
most cases, since the transformation is done for us by our friendly
Haskell compiler/interpreter.

Actually, the Haskell compiler/interpreter does not need to do the
conversion at all, as long as the behavior is AS IF the conversion has
been done. There are often more efficient ways to do things than to
translate into the Haskell kernel language.

Semantics (almost) always ignore efficiency issues :-)

> I'm not sure where this rule fits into the
> definition of the language.

And I am not sure I understood your worries, so forgive me if my
explanation was out-of-bounds, or trivial.

Regards,

David

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



RE: You can finally run your Chameleon programs!

2002-11-27 Thread David Bergman
Vincenzo,

I agree with your feeling of the expressive superiority of functional
programming compared to C and even C++, although I would not use the
word "hell" ;-)

Actually, there is a lot of advanced meta programming enabled in C++ due
to its rather powerful, but intricate, typing system in specializing and
instantiating templates, and the syntactic application of the "()"
operator. You should have a look at a library called Boost.MPL at
http://www.boost.org for some compile-time constructs resembling what
languages like Haskell can provide. In short, I think the expressiveness
of powerful type systems has caught on in the C++ community. So, maybe
one day the C++ developers take the next natural step and use Haskell
:-)

The reason that I bring this up is because I have implemented a Haskell
Prelude-like library for me and my development team in those "hellish"
cases where we need to express our ideas in C++, in that way promoting
the "declarative, abstract and typed" thought patterns to the regular
developer.

Regards,

David

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
Behalf Of Nick Name
Sent: Wednesday, November 27, 2002 3:29 PM
To: [EMAIL PROTECTED]
Subject: Re: You can finally run your Chameleon programs!

On Wed, 27 Nov 2002 17:00:36 +0800 (GMT-8)
Martin Sulzmann <[EMAIL PROTECTED]> wrote:

>  However, because
>  Chameleon gets translated to Haskell, you can in theory use
>  all of the FFI and other stuff.

I know that it's a research language, but for example it would be nice
to be able to experiment with gtk2 and chameleon. 

Functional programming, and advanced type systems, stand in front of
C/C++ hell like a (well designed) GUI stands in front of the command
line: maybe you have less control, but you do things quick and
correctly, so I think it's interesting to try to apply new ideas to
small "real world" examples. And to do this you need the FFI or a rich
standard library (GHC provides both :)).

IMHO

Vincenzo

-- 
Empty, useless theaters might fill up
if you set out to perform yourself
[CCCP]
The content of this e-mail may be modified and redistributed,
provided this note and its meaning remain intact.
___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell




RE: "cartesian classes"

2002-11-27 Thread David Bergman
Frank,

Then we do mean the same thing, because a relation is indeed a subset of
a cartesian product ;-) That is what I meant by "cartesian classes".
Sorry for any confusion that might have caused. And, I am fully aware
that most people in this mail group are quite familiar with simple
set-theoretic concepts, but since the question was raised, and I know
that there are a few more pragmatic programmers aboard, I assumed it
would not hurt to explain that a bit.

I still think "multi-parameter classes" is bad terminology, though,
since the subconstruct "C a b" in the class declaration "class C a b
where ..." does not denote a class (template), but the "C" does... This
is in contrast to "multi-parameter types", such as "data Tree a b = Leaf
a | Branch b (Tree a b) (Tree a b)" where the subconstruct "Tree a b"
does denote a type (template).

Maybe I am the only one with this view...

Thanks for the concrete usage info for functional dependencies
restricting these "class relations".

David

-Original Message-
From: Frank Atanassow [mailto:[EMAIL PROTECTED]] 
Sent: Wednesday, November 27, 2002 10:15 AM
To: David Bergman
Subject: Re: "cartesian classes"

David Bergman wrote (on 26-11-02 13:18 -0500):
> > What do you mean by "cartesian classes"? Do you mean multi-parameter
> > type classes?
> 
> > They're supported by GHC and Hugs, but not NHC98 or HBC. The same
> > goes for functional dependencies.
> 
> Yes, I meant what is commonly known as multi-parameter type classes,
> although that name IMHO does not make much sense, since only the
> explicitly parameterized type class should qualify as an any-parameter
> type class, with the (non-Haskell syntax) "b in C a", where "a" is the
> parameter, "C a" the resulting type class, and, finally, "b" the (type)
> element of the class.
> 
> In a regular class declaration, "C a", I would argue that "C" is the
> class, not being single-parameter, but zero-parameter. It is sometimes
> unfortunate that the syntax of Haskell makes this look like a
> constructor...

I'm afraid I don't understand this. If it helps, the way I think of
classes is that they are predicates/relations at the type level; a
multi-parameter class gives an n-ary predicate for n>1. So, in:

  class C a b c where ...

C is a ternary predicate and

  instance C Int Char Int where ...

says that (Int, Char, Int) is a member of C regarded as a subset of a
product of sets.

> What I meant by cartesian is that using several variables, you would
> get (also in non-Haskell syntax) "(a, b, c) in C", i.e., the class
> actually being a subclass of the third cartesian power of the universal
> (implicit) type class.

Yes, I know what cartesian products, categories and coordinate systems
are, but I've never heard multi-parameter type classes referred to that
way before.

> If both GHC and Hugs support functional dependencies, I would probably
> dare to wander off into the mysterious land of Haskell 2.
> 
> A very concrete, but naïve, question: what is the syntax for defining
> functional dependencies in Hugs and GHC? Since Mark Jones' syntax was
> abstract in his paper, it is kind of hard to deduce an ASCII
> equivalence (I have tried to figure out how to create a curly arrow
> from the keyboard ;-)

The syntax is, for example:

  class Class a b c d | a c -> b d where ...

meaning that a and c determine b and d, so if the declaration

  instance Class A B C D

is in scope, then, for all B', D' s.t. B'/=B or D'/=D, it is illegal
for there to be a declaration of the form

  instance Class A B' C D'

also in scope.
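As a concrete sketch of how such a dependency is used (illustrative
class and instance of my own, not from the thread; needs the GHC/Hugs
extensions discussed here):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

-- The element type e is uniquely determined by the collection type c,
-- so a second instance such as "Collection [Int] Char" would be
-- rejected by the compiler.
class Collection c e | c -> e where
  empty  :: c
  insert :: e -> c -> c

instance Collection [Int] Int where
  empty  = []
  insert = (:)

main :: IO ()
main = print (insert 1 (empty :: [Int])) -- [1]
```

Without the "c -> e" dependency, the call site would be ambiguous: the
annotation on empty fixes c, and the fundep then fixes e.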

There is an account of the syntax and semantics in the Hugs manual,
7.1.1.3:

  http://cvs.haskell.org/Hugs/pages/hugsman/exts.html

-- 
Frank

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



RE: "cartesian classes"

2002-11-26 Thread David Bergman
Curly enough...

Thanks,

David

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
Behalf Of Hal Daume III
Sent: Tuesday, November 26, 2002 1:40 PM
To: David Bergman
Cc: 'Haskell Mailing List'
Subject: RE: "cartesian classes"

> A very concrete, but naïve, question: what is the syntax for defining
> functional dependencies in Hugs and GHC? Since Mark Jones' syntax was
> abstract in his paper, it is kind of hard to deduce an ASCII
> equivalence (I have tried to figure out how to create a curly arrow
> from the keyboard ;-)

I can't comment on the rest, but for this:


class Foo a b c d | a b -> c, b -> c d where ...


means "a and b together determine c, and b by itself determines c and d"

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell




RE: "cartesian classes"

2002-11-26 Thread David Bergman
(I am having problem with my lovely Outlook program, so here I send it
again, to the whole group; sorry, Frank...)

Frank Atanassow wrote:
 
> David Bergman wrote (on 26-11-02 01:29 -0500):
>> I would like to know if anyone (maybe Mark P) knows the status of 
>> "Cartesian classes" in different Haskell implementations. I.e., does 
>> anyone implement the suggested functional dependencies or the less 
>> general parameterized type classes?
>> 
>> I have need for the multi-variable classes quite often (especially in
>> a genetic algorithm framework I am building in Haskell). Although I 
>> would hesitate to extend beyond Haskell 98, if there exist a common 
>> view on how to best implement Cartesian classes (read "it will be 
>> part of Haskell 2") and/or a stable implementation for it, I am 
>> willing to be a bit adventurous...

> What do you mean by "cartesian classes"? Do you mean multi-parameter 
> type classes?

> They're supported by GHC and Hugs, but not NHC98 or HBC. The same goes
> for functional dependencies.

Yes, I meant what is commonly known as multi-parameter type classes,
although that name IMHO does not make much sense, since only the
explicitly parameterized type class should qualify as an any-parameter
type class, with the (non-Haskell syntax) "b in C a", where "a" is the
parameter, "C a" the resulting type class, and, finally, "b" the (type)
element of the class.

Side note: functional dependencies would, admittedly, produce parameters
for classes, but only implicitly so.

In a regular class declaration, "C a", I would argue that "C" is the
class, not being single-parameter, but zero-parameter. It is sometimes
unfortunate that the syntax of Haskell makes this look like a
constructor...

What I meant by cartesian is that using several variables, you would get
(also in non-Haskell syntax) "(a, b, c) in C", i.e., the class actually
being a subclass of the third cartesian power of the universal
(implicit) type class.

If both GHC and Hugs support functional dependencies, I would probably
dare to wander off into the mysterious land of Haskell 2.

A very concrete, but naïve, question: what is the syntax for defining
functional dependencies in Hugs and GHC? Since Mark Jones' syntax was
abstract in his paper, it is kind of hard to deduce an ASCII equivalence
(I have tried to figure out how to create a curly arrow from the
keyboard ;-)

Thanks,

David


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



RE: Best recursion choice for "penultimax"

2002-11-25 Thread David Bergman
Hi,

(maybe I got the message to the community this time, Mark P ;-)

I would like to know if anyone (maybe Mark P) knows the status of
"Cartesian classes" in different Haskell implementations. I.e., does
anyone implement the suggested functional dependencies or the less
general parameterized type classes?

I have need for the multi-variable classes quite often (especially in a
genetic algorithm framework I am building in Haskell). Although I would
hesitate to extend beyond Haskell 98, if there exist a common view on
how to best implement Cartesian classes (read "it will be part of
Haskell 2") and/or a stable implementation for it, I am willing to be a
bit adventurous...

/David

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
Behalf Of Mark P Jones
Sent: Monday, November 25, 2002 1:07 AM
To: 'Dr Mark H Phillips'
Cc: 'Haskell Mailing List'; Mark P Jones
Subject: RE: Best recursion choice for "penultimax"

Hi Mark,

| I have just implemented the function "penultimax" which takes a list
| of positive integers and produces the "penultimate maximum", that is,
| the next biggest integer in the list after the maximum.  Eg:
| 
| penultimax [15,7,3,11,5] = 11

To your three implementations, let me add another two.  If you are
looking for the smallest possible definition, consider the following:

  import List

  penultimax1 :: Ord a => [a] -> a
  penultimax1  = head . tail . sortBy (flip compare)

In other words, to find the second largest, sort (in descending order,
which is why I use "flip compare") and then extract the second element.
(You could also use "(!!1)", but I think that "head . tail" is nicer.)
Thanks to lazy evaluation, using sort in this way isn't as expensive
as you might think; because we ask only for the first two elements,
only a small part of the full sort computation will be needed.

A little more algorithmic sophistication leads to the following
alternative that can find the penultimax with only  n + log2 n
comparisons (approx), where n is the length of the list.

  penultimax :: Ord a => [a] -> (a, a)
  penultimax = tournament . map enter
    where enter x = (x, [])

          tournament [(x, xds)] = (x, maximum xds)
          tournament others     = tournament (round others)

          round ((x,xds):(y,yds):others)
            | x >= y    = (x, y:xds) : rest
            | otherwise = (y, x:yds) : rest
            where rest = round others
          round xs = xs

The inspiration for this code is a knock-out tournament, treating
the values in the input list as teams.  To "enter" the competition,
each team is paired with the (initially) empty list of teams that it
has defeated.  In each round, we play the teams against each other in
pairs (if there are an odd number of teams, the last one gets a "bye"
to the next round).  In each game, the team with the highest value
wins, and adds the opponent to its list of victories.  The tournament
concludes when only one team remains.  And here comes the clever
part:  the penultimax must be the largest entry in the victor's list
of defeats because it would have won all of its games until, at some
point, being knocked out of the competition by the eventual winner.
And hence we need only scan that list for its "maximum".

[I'm afraid I don't know who invented this---I learned about it while
teaching a class on algorithms---but the rendering above in Haskell
is mine, and could be buggy!]

Neat algorithm eh?  But be careful ...

| How do I work out which is best to use?  Is there
| one clear "winner", or will they each have pros and
| cons?

Some quick tests with Hugs +s on an example list that I constructed
with 576 elements give food for thought:

                     reductions   cells
   my one liner            4035   11483
   tournament              7053   12288
   your penultimax        16715   20180
   your penultimax2        7466   10344
   your penultimax3        8605   13782

With the caveat that this is just one example (although others
I tried gave similar results), the conclusion seems to be that
my one liner is probably the winner, beating all of the others
in reductions, all but one of the others in space, and with the
simplest definition of all.  The fact that it is coded entirely
using prelude functions might also be a benefit if you use a
compile that provides fancy implementations or optimizations
for such functions.

My advice is that you should always start with the simplest
definition (i.e., the one that is easiest to code, easiest to
understand, and most easily seen to be correct).  You should not
worry about rewriting it in what you hope may be a more efficient
form unless you find later, by profiling or other means, that
its performance really is a problem.  (In which case, you'll be
able to collect some real, representative data against which
you can test and evaluate the alternatives.)  For starters,
a supposedly "improved" version might not actually b