Research Grants at UPM

2002-11-29 Thread clip-small

[Apologies for multiple copies]
[Please post as you deem appropriate]

The CLIP Group at the Technical University of Madrid (UPM) offers two
pre-doctoral, four-year scholarships (research assistant level) in the
area of programming language technology: program analysis,
transformation, and compilation.

The work, intended to lead to a Ph.D., is to be done as a member of
the CLIP Group http://www.clip.dia.fi.upm.es, within a nationally
funded research project. The scholarships are funded by the Ministry
of Science and Technology http://www.mcyt.es.

Requirements for candidates are detailed in the CLIP group's job
openings page: http://www.clip.dia.fi.upm.es/Job_Openings/jobs.html

The application deadline is December 6, 2002 (strict).

Please send a short CV, including previous research work and
publications, and one to three references, in electronic form, in reply
to this message.

The main theme of the work envisioned within the scholarships is to
advance the state of the art of automatic program manipulation
techniques. The objective is to increase the power of such techniques
and to integrate them into practical tools within the Ciao multi-paradigm
programming system, which combines logic, functional, constraint, and
object-oriented programming. One additional objective is to target
program optimizations to the needs of the computational elements of
pervasive computing environments. One of the main lines of work
involves extending the program compilation and development tools so
that they can be used in such an environment to reduce program
resource consumption and to instrument programs for dynamic
control of that consumption during execution. The final aim is to
develop an environment that would be an ideal candidate for
programming ambient intelligence in mobile and pervasive computing
environments.
___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: The Haskell 98 Report

2002-11-29 Thread Antti-Juhani Kaijanaho
[Resend, sorry for any duplicates you might get.]

On 20021129T102259-, Simon Peyton-Jones wrote:
> The copyright will still be (c) Simon Peyton Jones (as it has for some
> while; it has to be attached to someone or some thing),

AIUI, legally it is attached to everyone who has ever contributed
significant amounts of text to the report...

Anyway, I am very happy about the result.  As someone said, added value,
not monopolies :-)

And I will buy the book myself, and I will persuade the university
library to buy a copy or two too.  (Actually, I would have done that
anyway, but now I can do it without feeling guilty.)

-- 
Antti-Juhani Kaijanaho, LuK (BSc)* http://www.iki.fi/gaia/ * [EMAIL PROTECTED]
___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



RE: Running out of memory in a simple monad

2002-11-29 Thread David Bergman
Just to add to what Zdenek wrote:

The linear complexity of string concatenation in a naïve implementation
(one without access to an extra-language "end-of-list" pointer in the
"diff list" sense) makes the total complexity O(n^2), since the number
of conses generated is thus

sum [1 .. n]

which, obviously, is (1+n)*n/2. In the case of n=50000 in the
example we get "sum [1 .. 50000]" => "1250025000". So, well over one
billion conses. This is why "++" is right associative :-)

This time complexity cannot make Hugs crash, though, except with a
defective GC that has problems tidying up after each round of "++".

The space complexity, which reduces to maximum execution stack space
(considering a proper GC) in the example, is what kills Hugs. The
problem is that the string concatenation is not the last call, so there
is no room for last call optimization.

If you want to mimic the complexity of the example while calculating the
number of conses required, try evaluating the "isomorphic" expression

last $ scanl1 (+) $ take 50000 (repeat 1)

It might crash in Hugs, running out of execution stack space, for the
same reason as the original example. It is the "last" that holds up the
stack space, by the way.
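
For concreteness, here is a small, self-contained sketch (not from the
original example; the helper names are made up) contrasting the two
shapes of concatenation discussed above. The second version is the
"diff list" idea: the partial result is kept as a function of type
String -> String, so each append is O(1) and the string is materialised
only once at the end.

  -- Left-nested (++): each step re-walks the whole accumulated prefix,
  -- so the total work is roughly sum [1 .. n] conses, i.e. O(n^2).
  leftNested :: Int -> String
  leftNested n = foldl (\acc c -> acc ++ [c]) "" (replicate n 'x')

  -- Difference-list style: compose functions instead of appending,
  -- which is O(1) per character and O(n) overall.
  diffList :: Int -> String
  diffList n = foldl (\acc c -> acc . (c :)) id (replicate n 'x') ""

  main :: IO ()
  main = do
    print (length (diffList 50000))    -- effectively instantaneous
    print (length (leftNested 20000))  -- already noticeably slow (quadratic)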

I hope this was helpful.

Regarding "do":

It is easy to get the feeling that the recursive call in

recurse = do
f x
recurse

is the last one, but this is just an illusion of the iterative layout of
"do". Sometimes the monads lure us into old iterative thought
patterns...

Taking away the syntactic sugar, we end up with

recurse = f x >> recurse

This reveals the fact that there can be no last call optimization,
because the last call is ">>".

Regards,

David

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



ANNOUNCE: HaXml-1.08

2002-11-29 Thread Malcolm Wallace
HaXml-1.08
--
http://www.haskell.org/HaXml/

We announce a fresh release of HaXml, a collection of libraries and
tools for using XML from Haskell.  This is mainly a bug-fix release.

What is new in 1.08?

* A new and highly useful function, Text.XML.HaXml.Validate.partialValidate,
  does validation except for checking whether the root element type matches
  that of the DTD's root element. This is just what you need in order to
  validate any partial document or sub-document.

* The function Text.XML.HaXml.Html.Generate.htmlprint had a little
  bug which caused it to loop forever when a stretch of text longer
  than 60 characters contained no space; this is now fixed.

* The Xtract parser and combinators are now included in the
  HaXml library package, rather than existing solely for the Xtract
  command-line tool.

* HaXml now works nicely with ghc-5.04.x as well as ghc-5.02.x and nhc98.

Regards,
Malcolm
___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: nub

2002-11-29 Thread Jan-Willem Maessen
I find that nub is nearly always the wrong tool for the job (unless
the job is trivial quickie coding).  I'll point out that:

> map head . group . sort

has O(n lg n) asymptotic complexity, whereas nub and (sort . nub) are
both O(n^2).  This fact seems all too frequently forgotten.  For short
lists it doesn't matter much---but I find I like to use set union a
lot, and "merge" is quite a bit better than List.union for this, so I
tend to keep even short sets sorted.  (For bigger sets, of course, one
should import a nice set implementation.)
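
To make this concrete, here is a rough sketch of the helpers under
discussion (uniq is the same definition Richard gives below; uniqSort
and merge are my own illustrative versions, not code from any library):

  import Data.List (group, sort)

  -- Unix-style uniq: collapse *adjacent* duplicates only; works lazily.
  uniq :: Eq a => [a] -> [a]
  uniq = map head . group

  -- O(n lg n) duplicate removal for unordered input.
  uniqSort :: Ord a => [a] -> [a]
  uniqSort = map head . group . sort

  -- Linear-time set union for lists that are kept sorted.
  merge :: Ord a => [a] -> [a] -> [a]
  merge xs [] = xs
  merge [] ys = ys
  merge (x:xs) (y:ys) = case compare x y of
    LT -> x : merge xs (y:ys)
    EQ -> x : merge xs ys
    GT -> y : merge (x:xs) ys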

Richard Braakman <[EMAIL PROTECTED]> writes:
> uniq = map head . group

and later:
> I think both functions are useful.  If I understand it right, uniq
> can evaluate its argument list lazily, while nub cannot.  There's no
> real need to put uniq in the standard library, though.

Here's a little experiment for folks with big Haskell programs:

Look for uses of "group" in your code.  I suggest:
  find -name '*.hs' | xargs grep -n '[^a-zA-Z]group[^a-zA-Z]' 

Now, look at two divisions of these calls:
1) What proportion are *not* performed in conjunction with a call to
   sort?

2) What proportion are *not* preceded by "map head"?

In the case of the Eager Haskell compiler, the answer to (1) was
"none", and the answer to (2) was "several".

I suspect I'm not uniq in this finding.  Providing "uniq" and/or
"uniqSort" as prelude functions could in fact be a great service.  

Of course, if I only got to add one function to List, it would be
some variant of "merge".  

-Jan-Willem Maessen
___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: The Haskell 98 Report

2002-11-29 Thread Claus Reinke
Hmm, I remained relatively quiet throughout the discussion, as I didn't
expect to buy the book version, and my worries about the online version
were being addressed by others. But as a Haskell user and (occasional)
paper author, I would like to register that CUP's handling of copyright
here has definitely not gone unnoticed.

From an author's perspective, I'd love to see more publishers approaching
the copyright question on an added-value basis (where a non-exclusive
copyright is sufficient because the item will be published promptly, in
good quality and at a reasonable price - so people will want to buy it, not
because they can't get it elsewhere, but because they like what they get!).

So, as a small token, I've revised my original plan and will now buy one
of the printed versions (I shall also place higher priority on submitting
to JFP in the future;-). Let's support forward-looking publishers!

Thanks, Simon, and thanks, Conrad Guettler & CUP!
Claus

- Original Message -
From: "Simon Peyton-Jones" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Cc: "Simon Peyton-Jones" <[EMAIL PROTECTED]>; "Conrad Guettler (Conrad Guettler 
(CUP))"
<[EMAIL PROTECTED]>
Sent: Friday, November 29, 2002 10:22 AM
Subject: The Haskell 98 Report


Folks,

As you know, Cambridge University Press are doing us the huge service of
publishing the Haskell 98 report, both as a special issue of the Journal
of Functional Programming (Jan 2003) and as a hardback book (it'll cost
around £35).

I'm very, very, very happy to say that, following discussion with CUP,
the copyright and reproduction arrangements for the report will remain
unchanged; i.e. exactly as they are at:
http://research.microsoft.com/~simonpj/haskell98-revised/haskell98-report-html

I'm signing a letter that grants CUP a *non-exclusive* license to publish
the Report, but it places no limitations on what else may be done with
it.  The copyright will still be (c) Simon Peyton Jones (as it has for
some while; it has to be attached to someone or some thing), and the
existing notice that says "you can do what you like with this Report"
will stay unchanged.  No "non-commercial only" caveats.

In my view this is extremely generous of CUP, and I am particularly
grateful to Conrad Guettler for making it happen.  As a thank-you to
CUP, perhaps you can all go out and buy a copy!

Simon

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



The Haskell 98 Report

2002-11-29 Thread Simon Peyton-Jones
Folks,

As you know, Cambridge University Press are doing us the huge service of publishing 
the Haskell 98 report, both as a special issue of the Journal of Functional 
Programming (Jan 2003) and as a hardback book (it'll cost around £35).

I'm very, very, very happy to say that, following discussion with CUP, the copyright 
and reproduction arrangements for the report will remain unchanged; i.e. exactly as 
they are at:
http://research.microsoft.com/~simonpj/haskell98-revised/haskell98-report-html

I'm signing a letter that grants CUP a *non-exclusive* license to publish the Report, 
but it places no limitations on what else may be done with it.  The copyright will 
still be (c) Simon Peyton Jones (as it has for some while; it has to be attached to 
someone or some thing), and the existing notice that says "you can do what you like 
with this Report" will stay unchanged.  No "non-commercial only" caveats.

In my view this is extremely generous of CUP, and I am particularly grateful to Conrad 
Guettler for making it happen.  As a thank-you to CUP, perhaps you can all go out and 
buy a copy!

Simon
___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: Running out of memory in a simple monad

2002-11-29 Thread Zdenek Dvorak
Hello,

I hope I understand what's going on; if not please someone correct me.


I have problems with monads and memory. I have a monad through which
I thread output. If I do the concatenation of the output-strings in
one way Hugs runs out of memory, but if I do it in another way
everything works. I can't see why the first way doesn't work but the
second is OK. I would appreciate it if someone could tell me what I am
doing wrong.
  Here is the non-working monad:   -}


The problem is not directly connected to monads; the problem is this:

[] ++ x = x
(h:t) ++ x = h : (t++x),

i.e. the time complexity of (++) is proportional to the length of the first list.

first way:


putCharM c  = M $ \o  -> ((), o ++ [c]) -- Is this stupid in some way?


this takes the list (looong) of everything produced before this putCharM
and concatenates c as its last member; this takes time linear in the
length of the list, and summing over all putCharMs it is quadratic (and
of course, due to laziness, a lot of memory is consumed; seq does not
help, as it only evaluates the first cell of the list, just enough to see
that it is not empty; deepSeq would force the whole list, but the time
would still be quadratic).

the second way:


  M f >>= k  = M $
let (x, o)   = f
M f2 = k x
(x', o') = f2
in (x', o ++ o')


this is done the other way around (because >>= associates to the right);
we concatenate the output of f (short) with the output of f2 (the rest of
the computation, i.e. looong); the time is proportional to the length of
the first list, so it is effectively constant here; summing over all
putCharMs, we get linear time and we are happy :-)

If you want to do it the first way, define

putCharM c  = M $ \o  -> ((), c : o)

and then reverse the list in runM.
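
For completeness, here is a rough, self-contained sketch of how that
could look end to end. This is not the original poster's code; in
particular, the definition of M below (a function threading the output
built so far) is my guess, since the full type is not shown in the
thread:

  newtype M a = M (String -> (a, String))

  instance Functor M where
    fmap f (M g) = M $ \o -> let (x, o') = g o in (f x, o')

  instance Applicative M where
    pure x = M $ \o -> (x, o)
    M gf <*> M gx = M $ \o ->
      let (f, o1) = gf o
          (x, o2) = gx o1
      in (f x, o2)

  instance Monad M where
    M g >>= k = M $ \o ->
      let (x, o') = g o
          M g2    = k x
      in g2 o'

  -- O(1) per character: cons onto the front of the accumulated output.
  putCharM :: Char -> M ()
  putCharM c = M $ \o -> ((), c : o)

  -- One O(n) reverse at the very end restores the original order,
  -- e.g.  runM (putCharM 'h' >> putCharM 'i')  ==  ((), "hi")
  runM :: M a -> (a, String)
  runM (M g) = case g "" of (x, o) -> (x, reverse o)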

Zdenek Dvorak


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: arrows

2002-11-29 Thread Ashley Yakeley
At 2002-06-29 14:43, Wolfgang Jeltsch wrote:

>To simplify things a bit, let's take a simpler Parser type which doesn't
>use monad state transformers but simple state transformers instead. This
>makes the type ParserState simpler. You can think of a parser state as
>an infinite list of substate-token pairs now. The tokens denote the
>input stream. The substates may be used to encapsulate parser states of
>a subordinate parsing process. When a token is read, the corresponding
>token-substate pair is discarded.

Sounds very complicated. Wouldn't a simple stream transformer work?

  newtype MyParser baseMonad token output = 
   MkMyParser (baseMonad token -> baseMonad output);

>(>>=):     sequencing of parsers
>return:    returning values
>fail:      generation of always failing parsers
>mzero:     always failing parser
>mplus:     parsing using alternative parsers
>fmap:      applying a function to the result of a parser
>(>>>):     providing high-level tokens and parsing using these tokens
>identity:  reading a single token
>pure:      reading a single token and applying a function to it

I think I can do all of those for MyParser. There's a one-token 
look-ahead variant:

  newtype MyLookaheadParser m t a = 
   MkMyLookaheadParser (m t -> Maybe t -> m (Maybe t,a));

>(I don't allow
>finite lists here to make some things a bit easier and more elegant. You
>can easily produce an infinite list from a finite one by applying
>something like (++ repeat Nothing) . map Just to the finite list.)

Wouldn't it be easier to allow finite token lists and have reading past 
the last token be the same as mzero?

-- 
Ashley Yakeley, Seattle WA

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: nub

2002-11-29 Thread Ketil Z. Malde
Richard Braakman <[EMAIL PROTECTED]> writes:

> On Thu, Nov 28, 2002 at 10:21:53PM +, Alistair Bayley wrote:
>> Wouldn't this have been better called "unique"? (analogous to the Unix
>> program "uniq"). I was looking for a "unique" in the GHC Data.List library a
>> few days ago, and didn't see one, so I wrote my own (not realising that was
>> what "nub" did).

BTDT.

> No, Unix uniq makes only a single pass.
> uniq = map head . group

> Hmm, but with that said, I don't think I disagree with you.  Renaming
> "nub" to "unique" makes it clear that it is similar, but not identical
> to what Unix "uniq" does.

And of course the traditional Unix idiom for "nub" is 'sort | uniq'
(or (uniq . sort) if you prefer).
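
(A quick illustration, not from the thread: the two differ in the order
of the result, so they are only interchangeable when the order does not
matter.)

  import Data.List (group, nub, sort)

  main :: IO ()
  main = do
    print (nub "banana")                        -- "ban"  (first-occurrence order)
    print ((map head . group . sort) "banana")  -- "abn"  (sorted)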

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: You can finally run your Chameleon programs!

2002-11-29 Thread Nick Name
On 27 Nov 2002 23:22:31 +
Alastair Reid <[EMAIL PROTECTED]> wrote:

> 
>  I think you've been spoilt by the availability of 4 good compilers,
>  lots of libraries, an active research community, etc. for the Haskell
>  "research language".  

I don't know what "to spoil"  means in this contests but I'm sure it's
sort of a children having too much toys :)

And that's true: Haskell is one of the few production-ready research
languages (or almost production-ready; I am not a software engineer).
What I said is that it could be interesting to use gtk-hs and similar
libraries with Chameleon, and I asked whether that is possible.

I think it could be interesting because there is no production program
that does not crash, and no production program without millions of lines
of code; so I hope that starting to use new paradigms, or at least
cleaner languages, will be a good step toward a more correct operating
environment, in an era when we are really starting to put our lives in
the hands of machines.

I hope I have been clear; as usual, I am not so good at English.

Vincenzo
___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: You can finally run your Chameleon programs!

2002-11-29 Thread Nick Name
On Wed, 27 Nov 2002 15:46:56 -0500
"David Bergman" <[EMAIL PROTECTED]> wrote:

> Vincenzo,
> 
> I agree with your feeling of the expressive superiority of functional
> programming compared to C and even C++, although I would not use the
> word "hell" ;-)

Just because you are not using wxwin and PREPROCESSOR BASED MESSAGE
PASSING! :PPP

I hope there are no wxwin developers on this list, but if there are,
really, please understand me :) Note that I consider wxwin itself really
good (it's worth using if you get a program that works on MacOS,
Windows, and Linux with a tenth of the effort), but there are things in
C++ that are not so good at all.

Besides, OO programming brings too much syntax and too little semantics,
in my opinion (consider the syntactic mess of deriving a class just to
get different behaviour in response to an event, compared to an
expression of type "IO a"); but I am compelled to use C++ this time, so
I call that "hell".

> The reason that I bring this up is because I have implemented a
> Haskell Prelude-like library for me and my development team in those
> "hellish" cases where we need to express our ideas in C++, in that way
> promoting the "declarative, abstract and typed" thought patterns to
> the regular developer.

Thanks; maybe my real troubles come from the fact that I am now used to
Haskell and OCaml, and not from C++ in itself.

Vincenzo
___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell