Re: Haskell 2 -- Dependent types?

1999-02-24 Thread Carl R. Witty

Fergus Henderson <[EMAIL PROTECTED]> writes:

> On 22-Feb-1999, Nick Kallen <[EMAIL PROTECTED]> wrote:
> > If this is true, then what I'm doing is horrible. But I don't see how this
> > leads to nondeterminism or broken referential transparency. min2 returns the
> > same value for the same list, but it's simply more efficient if we happen to
> > know some more information about the list.
> 
> In this particular case that happens to be so.  But it's not true in
> general.  What if the body of min2 were defined so that it returned
> something different in the two cases?  Your code has no proof that the
> code for the two cases is equivalent.  If it's not, then the behaviour
> would depend on whether the compiler could deduce that a particular
> argument had type Sorted or not.  And that in turn could depend on the
> amount of inlining etc. that the compiler does.
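
For concreteness, here is a minimal Haskell sketch of the scenario under
discussion; the Sorted wrapper and both function bodies are illustrative
assumptions, not the original poster's code:

    -- A list tagged as known to be sorted (illustrative encoding).
    newtype Sorted a = Sorted [a]

    -- Efficient case: the minimum of a sorted list is its head.
    minSorted :: Sorted a -> a
    minSorted (Sorted (x:_)) = x
    minSorted (Sorted [])    = error "empty list"

    -- General case: scan the whole list.
    minGeneral :: Ord a => [a] -> a
    minGeneral = minimum

    -- Fergus's point: if a single min2 dispatched between these two
    -- bodies according to what the compiler could prove about its
    -- argument, and the bodies disagreed, the result would depend on
    -- the cleverness of the type checker, not on the argument's value.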

I would assume that any language with this feature would have a
precise specification of type inference sufficient to determine which
of the two cases was used.  Without such a specification, portable
programming with dependent types is impossible; programs would compile
under one "smart" compiler (which does more inference) and fail with a
type error on another.

Carl Witty
[EMAIL PROTECTED]





Re: Haskell 2 -- Dependent types?

1999-02-24 Thread Carl R. Witty

[EMAIL PROTECTED] writes:

> [EMAIL PROTECTED] writes:
> 
> > enabling types to express all properties you want is, IMO, the right way.
> 
> Why do I feel that there must be another approach to programming?
> 
> How many people do you expect to program in Haskell once you are done adding all
> it takes to "express all imaginable properties through types"? What kind of 
> baroque monster will it be? Is type really _the_ medium for everything?

I would certainly love to program in a language that let me specify
that a sorting function really did sort.  Also, I am confident that if
done right, a dependent type system could be added on to Haskell such
that all existing Haskell programs would continue to be valid.
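
As a taste of what such a type system can express, here is a small
sketch in present-day GHC Haskell (the extensions postdate this thread,
and all names are illustrative): compiler-checked evidence that a
type-level list of naturals is sorted.  A real dependently typed sort
would return such evidence along with its output.

    {-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeOperators #-}

    import Data.Kind (Type)
    import GHC.TypeNats (Nat, type (<=))

    -- A value of type Sorted xs is evidence that xs is nondecreasing.
    data Sorted :: [Nat] -> Type where
      SNil  :: Sorted '[]
      SOne  :: Sorted '[x]
      SCons :: (x <= y) => Sorted (y ': ys) -> Sorted (x ': y ': ys)

    -- Accepted: the constraints 1 <= 2 and 2 <= 3 are solved statically.
    ok :: Sorted '[1, 2, 3]
    ok = SCons (SCons SOne)
    -- A claim like Sorted '[3, 1] would be rejected at compile time.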

I see a couple of reasons to "enable types to express all properties
you want".

1) I've been following efforts in the theorem proving/proof
verification community which are based on this idea.  Several
type-theory based verification systems are based directly on
expressing properties in types (e.g. Coq, LEGO, the Alf* family,
NuPRL).  Others (PVS and Ontic) gain a lot of mileage out of a very
expressive type system.  (My apologies for any relevant systems I've
left out here.)  Based on these efforts, it seems very natural to me
to extend this idea to a real, usable programming language.

2) Type checking has been widely studied and is pretty well
understood.  It makes sense to take this base and use it to make
languages even more powerful and expressive.

Carl Witty
[EMAIL PROTECTED]





Re: Modifying the monomorphism restriction

1999-02-24 Thread Alex Ferguson


Thomas Hallgren:
> The monomorphism restriction makes sure that certain values are computed
> at most once by restricting them to be used at only one type. Couldn't
> the same be achieved by
> 
>    * getting rid of the monomorphism restriction, i.e., letting all
>      definitions be overloaded by default,
>    * adding the language implementation requirement that overloaded
>      values should be computed at most once for each instance they are
>      used at.

I think this is definitely feasible, but I wonder if it's entirely
prudent.  It could definitely lead to a fair amount of code bloat.  One
of the annoyances of the MR is that it _disallows_ this as a solution,
even if the compiler were in a position to determine that it was
pragmatically sensible.  I can imagine cases where this remedy is
either basically unnecessary (little shared work inside the unapplied
CAF), or worse than the disease (blowup in the number of instances),
so it seems excessive to require it.

Slan libh,
Alex.






Re: Modifying the monomorphism restriction

1999-02-24 Thread Thomas Hallgren

John Hughes wrote:

> Some suggest that it is enough for compilers to issue a warning when using
> call-by-name. I disagree strongly.

I agree with John's objection to the compiler-warning solution, so here is
another suggestion:

The monomorphism restriction makes sure that certain values are computed at most
once by restricting them to be used at only one type. Couldn't the same be
achieved by

   * getting rid of the monomorphism restriction, i.e., letting all definitions
     be overloaded by default,
   * adding the language implementation requirement that overloaded values
     should be computed at most once for each instance they are used at.

Advantages of this solution:

   * It solves the problem. Since all definitions can be overloaded,
     definitions that today need the eta expansion fix, or the type signature
     fix, will work without the fix.
   * We have semantic backwards compatibility. For those programs that depend
     on the monomorphism restriction for efficiency, this solution should give
     the same efficiency.
   * We have syntactic backwards compatibility. No syntactic change is needed.
   * We get more consistent uses of type signatures. Type signatures are only
     used to restrict types, not to make them more general. (This means, for
     example, that a type signature that was added for documentation purposes
     can always be commented out if types change a lot during program
     development...)

One question then is how feasible this is to implement. But isn't this what you
get in implementations that resolve overloading at compile time rather than by
passing dictionaries at run time? Hasn't this been tried already (In GHC? In
Hugs?) and found to be feasible? (The reason it might not be feasible is that you
can get a code explosion, possibly an infinite one.)
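
A small illustration of the sharing requirement (all names hypothetical):
under the proposal, an overloaded constant would behave as if the
implementation kept one memoized copy per instance it is used at.

    -- The overloaded binding the programmer writes:
    twoPi :: Floating a => a
    twoPi = 2 * pi

    -- What the implementation requirement amounts to, conceptually:
    twoPiDouble :: Double
    twoPiDouble = 2 * pi   -- computed at most once for all Double uses

    twoPiFloat :: Float
    twoPiFloat = 2 * pi    -- computed at most once for all Float uses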

Have I missed something fundamental that prevents this solution from working?

--
Thomas Hallgren






Re: Modifying the monomorphism restriction

1999-02-24 Thread Alex Ferguson


Joe English:
> (Am I the only one who's never been bitten by the MR restriction?)

If one always uses type sigs, or never/rarely uses compositional/
combinator style function definitions, it's much less likely to
crop up.


> How about leaving the 'a = b' binding form as it is,
> (monomorphism restriction and all) and using 'a = ~ b'
> to indicate call-by-name.  '~' is already a reserved
> symbol, and it can't currently appear on the RHS of
> a binding, so this new syntax wouldn't break any existing
> programs.

I like that much less (though I admit I wasn't all that smitten by John's
exact notation, either).  I like it less because it looks (even) worse,
IM(humble-or-otherwise)O, and more to the point because I consider it
(still) to be the wrong 'default'.  The symbol ':=' in John's proposal can
be explained as 'monotypically bind a CAF', whereas Joe's notation
preserves the existing, ugly distinction between simple pattern bindings
and function pattern bindings.  In other words, definitions don't
eta-convert.


> This way call-by-need remains the default, and compilers
> will still signal an error if the programmer accidentally
> bumps into the MR.

This way bizarre type errors remain the default, and compilers
will signal an error _somewhere else_ in the program when you
bump into the MR, if my experience is anything to go by.
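
A minimal sketch of that error-at-a-distance (hypothetical names; the
module is deliberately ill-typed, to show where the complaint lands):

    -- The MR makes f monomorphic; its type gets pinned by the first use.
    f = (+)

    g :: Int -> Int -> Int
    g = f          -- this use fixes f at Int -> Int -> Int

    h :: Double -> Double -> Double
    h = f          -- the type error is reported here, far away from f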


> For people reading the code,
> a ~ on the RHS of a binding is a signal that something
> out-of-the-ordinary is going on operationally, the same as
> when it appears on the LHS.

I dispute that there's anything 'operationally out-of-the-ordinary'
about an overloaded function definition, which is almost invariably
what the MR is (silently) complaining at me for doing when I fall
over it.  It's only out-of-the-ordinary if you were depending on
it being operationally a CAF, so having some sort of special CAF
syntax seems reasonable to me.  Certainly I will happily stipulate
that ':=' may not be the definitive word on the best such syntax
(give us back our 11 days -- err, I mean, our constructor operator
symbol namespace!).

Slan libh,
Alex.






Re: Modifying the monomorphism restriction

1999-02-24 Thread Christian Sievers

John Hughes wrote:

> Everybody agrees the monomorphism restriction is a pain:

Hmm well, it's really not a nice thing.

> Some suggest that it is enough for compilers to issue a warning when using
> call-by-name. I disagree strongly. Such a warning may alert the programmer
> at the time the overloaded definition is compiled. But programmers need to
> understand programs at other times also. The person reading through the code
> of a library, for example, trying to understand why a program using that
> library is so slow or uses so much memory, will not be helped by warnings
> issued when the library was compiled. The distinction between call-by-need
> and call-by-name is vital for understanding programs operationally, and it
> should be visible in the source.

In a library I'd really expect to see a big comment when such a thing
happens. 

> So, let's make it visible, in the simplest possible way. Let there be TWO
> forms of binding: x = e, and x := e (say). A binding of the form `x = e' is
> interpreted using call-by-name, and may of course be overloaded: it makes `x'
> and `e' exactly equivalent. A binding of the form `x := e' is interpreted
> using call-by-need, and is monomorphic; `x' behaves as though it were
> lambda-bound. Now, for example,
> 
> >   pi = 4*asin 1
> 
> is an overloaded definition which (visibly) risks duplicated computation,
> while
> 
> >   pi := 4*asin 1
> 
> is a monomorphic definition at a particular instance which (visibly) does not.

But which instance? In this case the default mechanism can give the
answer, but in general, you would have to give a type unless `e'
already has a monotype. So you could use `x:=e' without a signature
exactly when you now could use `x=e' without one. 


> Advantages of this idea over the existing MR:
> 
> * Monomorphism is decoupled from the syntactic form of the definition. There
>   is no need to `eta-convert' definitions to get them into a form that the MR
>   does not apply to.

The difference between `x=e' and `x:=e' is surely a syntactic one,
though arguably one that is easier to justify.

> * Monomorphism is decoupled from overloading. With this proposal, x := e is
>   always a monomorphic definition, whether the type of e is
>   overloaded or not.

Again: how can this be?

>   Thus small changes to e cannot suddenly bring the MR into effect, perhaps
>   invalidating many uses of x.
> 
> * Monomorphism is decoupled from type inference. One may leave the type of 
>   a variable to be inferred regardless of whether it is bound by name or by
>   need.
> 
> Disadvantages:
> 
> * Requires another reserved symbol.
>
> * When converting Haskell 1.x to Haskell 2, many := would need to be inserted.
>   Failure to do so could make programs much less efficient. An (optional)
>   compiler warning could help here.

I don't see this. Or do you want to always recalculate any value
defined with `=' instead of `:=' ?
 
> An alternative design would be to use := to indicate polymorphism/overloading
> in pattern bindings, but retain = for polymorphic function bindings. That
> would make conversion from Haskell 1 to Haskell 2 simpler (one would simply
> replace = by := in pattern bindings with an overloaded type signature), but is
> an unattractively inconsistent design.


I don't like this idea (yet?), and would prefer the compiler-warning
version, or even keeping the MR - we could make our editors smarter and
let them add the type signatures if they change too often or are just too
weird for us, rather than introducing new syntax only in order to be able
to leave them out.


Christian Sievers





Re: Modifying the monomorphism restriction

1999-02-24 Thread Alex Ferguson


Christian Sievers replies to John Hughes:
> > Some suggest that it is enough for compilers to issue a warning when using
> > call-by-name. I disagree strongly. Such a warning may alert the programmer
> > at the time the overloaded definition is compiled. But programmers need to
> > understand programs at other times also. The person reading through the code
> > of a library, for example, trying to understand why a program using that
> > library is so slow or uses so much memory, will not be helped by warnings
> > issued when the library was compiled. The distinction between call-by-need
> > and call-by-name is vital for understanding programs operationally, and it
> > should be visible in the source.

> In a library I'd really expect to see a big comment when such a thing
> happens. 

I think John's point is that nothing _forces_ the library writer to
do this, whereas the MR does -- in a rather crude way, IMO.  Whilst
I'm not 100% convinced, I could accept some such compulsion, just so
long as it's possible to say either of the two possible "DWIM"s
reasonably concisely, and I can anticipate and trap when I'm going to
be bitten on the bum by the MR more readily than at present.


> > pi := 4*asin 1
> > 
> > is a monomorphic definition at a particular instance which (visibly)
> > does not.

> But which instance? In this case the default mechanism can give the
> answer, but in general, you would have to give a type unless `e'
> already has a monotype. So you could use `x:=e' without a signature
> exactly when you now could use `x=e' without one. 

That's the point, isn't it?


> > * Monomorphism is decoupled from overloading. With this proposal, x := e is
> >   always a monomorphic definition, whether the type of e is
> >   overloaded or not.

> Again: how can this be?

Because the MR is used on such definitions.  Hence, as with simple
bindings with no type declarations in Haskell 1.x (&98), they're
forced to be monomorphic, on pain of a really confusing type error
someplace else in your program.  OK, I'm being facetious -- it should
be possible to reverse-chain type errors to determine whether
application of the MR is a possible cause, but that's working our
compiler writers pretty hard for a rather small 'nut'.

Would an alternative be to require that ':='-definitions have some
(or all) type information added in explicitly?  One might require
that they have an explicit signature, but all that's really required
is to specify, in some way, to which monotype(s) one is specialising.
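
In Haskell 98 terms, the fully annotated variant already exists; a
sketch (illustrative name):

    -- A monomorphic, shared binding: the explicit signature plays the
    -- role a `:=` with mandatory type information would play.
    piDouble :: Double
    piDouble = 4 * asin 1   -- computed once, usable only at Double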


> > * When converting Haskell 1.x to Haskell 2, many := would need to be
> >   inserted.
> >   Failure to do so could make programs much less efficient. An (optional)
> >   compiler warning could help here.

> I don't see this. Or do you want to always recalculate any value
> defined with `=' instead of `:=' ?

The '=' definitions would be fully polymorphic, and hence there's
the possibility that they might be more overloaded than the programmer
realises, and in a way that the compiler may not field as well as
with a monomorphic definition.  (Or at all, really.)  But not
making them _illegal_ by coup de main leaves open the possibility
that the hideous inefficiency ain't so hideous after all (my usual
experience, frankly), or that an aggressive compiler can Make It Go
Away.

Slan libh,
Alex.






Monomorphism

1999-02-24 Thread John C. Peterson

You can't nuke monomorphism without addressing the ambiguity problem.
At the very least, you need scoped type variables to disambiguate
types in the absence of the MR.  This ambiguity is a definite pitfall,
and the type errors resulting from it will probably be even more
puzzling and harder to address than the errors generated by the
current MR, though quite a bit less frequent, I imagine.
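
An example of the disambiguation in later GHC syntax (the extension and
the names are illustrative, not part of Haskell 98): without a way to
mention the enclosing type variable, the local signature that pins down
`total` could not even be written.

    {-# LANGUAGE ScopedTypeVariables #-}

    norm :: forall a. Floating a => [a] -> a
    norm xs = sqrt total
      where
        total :: a                    -- refers to the outer `a`; this
        total = sum (map (^ 2) xs)    -- needs scoped type variables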

  John





Re: Modifying the monomorphism restriction

1999-02-24 Thread S. Alexander Jacobson

Why not allow the code bloat and treat type information as a hint by
which compilers/interpreters _may_ optimize?  I.e., when an expression
like

> foo = goo

violates the monomorphism restriction, allow overloading (and perhaps
code bloat), but if foo is explicitly typed,

> foo :: Num a => a -> b

then treat the assignment like the proposed := operator would (and
eliminate the code bloat).

A good compiler would flag (warn of) assignments that violate the MR and
give the programmer the option to optimize them by putting in the
appropriate type information whenever they bother to do so.
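
In today's terms the two readings might look like this (a sketch; the
names and the recovered sharing are assumptions about the proposal, not
defined behaviour):

    -- Overloaded reading: may be specialized per use (the tolerated
    -- code bloat):
    fac10 :: (Num a, Enum a) => a
    fac10 = product [1 .. 10]

    -- Explicitly pinned reading: the hint lets the compiler treat the
    -- binding like the proposed `:=`, computing it at most once:
    fac10' :: Integer
    fac10' = product [1 .. 10]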

The advantage of this proposal is that it is consistent with rapid
prototyping followed by optimizations/refinement.

-Alex-

___
S. Alexander Jacobson   Shop.Com
1-212-697-0184 voiceThe Easiest Way To Shop



On Wed, 24 Feb 1999, Alex Ferguson wrote:

> 
> Thomas Hallgren:
> > The monomorphism restriction makes sure that certain values are computed
> > at most once by restricting them to be used at only one type. Couldn't
> > the same be achieved by
> > 
> >    * getting rid of the monomorphism restriction, i.e., letting all
> >      definitions be overloaded by default,
> >    * adding the language implementation requirement that overloaded
> >      values should be computed at most once for each instance they are
> >      used at.
> 
> I think this is definitely feasible, but I wonder if it's entirely
> prudent.  It could definitely lead to a fair amount of code bloat.  One
> of the annoyances of the MR is that it _disallows_ this as a solution,
> even if the compiler were in a position to determine that it was
> pragmatically sensible.  I can imagine cases where this remedy is
> either basically unnecessary (little shared work inside the unapplied
> CAF), or worse than the disease (blowup in the number of instances),
> so it seems excessive to require it.
> 
> Slan libh,
> Alex.
> 







RE: Modifying the monomorphism restriction

1999-02-24 Thread R.S. Nikhil

> -----Original Message-----
> From: Joe English [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, February 24, 1999 2:36 PM
> To: [EMAIL PROTECTED]
> Subject: Re: Modifying the monomorphism restriction
> 
> This is a good idea, except for the use of ':='.
> I'd hate to lose that symbol from the programmers' namespace
> just to solve the MR problem.

I second this plea not to take away ":=".

Nikhil





Re: Modifying the monomorphism restriction

1999-02-24 Thread Joe English


I wrote:

> Operationally I expect that in "let x = f y in ... x ... x",
> 'f y' is only evaluated once, no matter what type it is.

Which, of course, is not how Haskell actually works,
if x :: (SomeClass a) => SomeType a.  DOH!

Please disregard my earlier remarks...


--Joe English

  [EMAIL PROTECTED]





Re: Modifying the monomorphism restriction

1999-02-24 Thread Joe English


Alex Ferguson <[EMAIL PROTECTED]> wrote:
> Joe English:
> > How about leaving the 'a = b' binding form as it is,
> > (monomorphism restriction and all) and using 'a = ~ b'
> > to indicate call-by-name. [...]

> I like that much less [...]  because I consider it
> (still) to be the wrong 'default'.

> > For people reading the code,
> > a ~ on the RHS of a binding is a signal that something
> > out-of-the-ordinary is going on operationally, the same as
> > when it appears on the LHS.
>
> I dispute that there's anything 'operationally out-of-the-ordinary'
> about an overloaded function definition, which is almost invariably
> what the MR is (silently) complaining at me for doing when I fall
> over it.  It's only out-of-the-ordinary if you were depending on
> it being operationally a CAF, so having some sort of special CAF
> syntax seems reasonable to me.

I was thinking of the example from the Haskell Report:

let { len = genericLength xs } in (len, len)

which, without the MR, computes 'len' twice.
Operationally I expect that in "let x = f y in ... x ... x",
'f y' is only evaluated once, no matter what type it is.
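
A concrete rendering of that example (using a present-day GHC flag for
illustration): with the restriction lifted, `len` stays overloaded, so
the two uses below elaborate to two dictionary applications and two
traversals of the list.

    {-# LANGUAGE NoMonomorphismRestriction #-}

    import Data.List (genericLength)

    -- len :: Num i => i is secretly a function of a Num dictionary,
    -- so the length is computed twice:
    lengths :: [a] -> (Int, Integer)
    lengths xs = let len = genericLength xs in (len, len)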

Also John Hughes' warning that:

> * When converting Haskell 1.x to Haskell 2, many := would need to be inserted.
>   Failure to do so could make programs much less efficient.

That's why I'd prefer to keep call-by-need the default
and use new syntax for call-by-name/overloading.


> This way bizarre type errors remain the default, and compilers
> will signal an error _somewhere else_ in the program when you
> bump into the MR, if my experience is anything to go by.

Could you provide an example?  Every instance of the MR
I've been able to come up with winds up giving an error message
right at the definition that would need to have a ~ or an
explicit type signature added.


--Joe English

  [EMAIL PROTECTED]





International Masters Programme in Computational Logic

1999-02-24 Thread CL Advertisement

International Masters Programme in COMPUTATIONAL LOGIC

The Dresden University of Technology is offering a two-year study
programme, in English, leading to a master of science in computational
logic (together with a German "Diplom in Informatik").  The programme is
sponsored by the European Network of Excellence in Computational Logic
(COMPULOG Net) and other German institutions.

Past and present teachers include:

   Oskar Bartenstein        Jim Lipton
   Maria Paola Bonacina     Faron Moller
   Anatoli Degtyarev        Michael Posegga
   Enrico Franconi          Horst Reichel
   Alessio Guglielmi        Joerg Siekmann
   Steffen Hoelldobler      Michael Thielscher
   Dieter Hutter            Heiko Vogler
   Michael Kohlhase         Andrei Voronkov
   Giorgio Levi             Christoph Weidenbach

Courses focus on logic and constraint programming, artificial
intelligence, type theory, model theory, proof theory, equational
reasoning, databases, natural language processing, planning and formal
methods, among others.

The tuition fees are waived.  At the end of the programme a research
master's thesis must be defended.

Prerequisites are a good knowledge of the basics of logic, and
familiarity with mathematical reasoning.  Knowledge of the foundations of
artificial intelligence and logic programming is desirable.  Fluency in
English is indispensable; German is not necessary at all, but there are
facilities for studying it if desired.  A bachelor's degree in Computer
Science, or an equivalent degree, is required by the beginning of
courses, in October '99.

Dresden, on the river Elbe, is one of the most important art cities of
Germany.  The economy is growing rapidly: Siemens, Motorola and AMD are
building the most modern chip factories in Europe, and the chances of
getting a job after the master's degree are very good throughout Germany.
The University is very well equipped, and the teacher/student ratio is
close to 1.  International contacts make it easy for interested students
to continue pursuing a career in research.

Deadline for applications is 30/06/99, but applications are processed as
they come.  To apply, just send an e-mail with your curriculum vitae to
.  Further information is on the web
at .  Paper information
material is available on request.

Please distribute this message as widely as possible.

   Maja von Wedelstedt, secretary
  Artificial Intelligence Institute, Department of Computer Science
  Dresden University of Technology, D-01062 Dresden, Germany
 Tel: [49] (351) 463-8341Fax: [49] (351) 463-8342
  email: [EMAIL PROTECTED]






Modifying the monomorphism restriction

1999-02-24 Thread John Hughes


Everybody agrees the monomorphism restriction is a pain:

* Often we WANT to make overloaded definitions of the form variable = expr

* The eta-expansion fix is ugly, and only works if the variable has a 
  function type

* Adding a type signature instead is tedious during prototyping, and moreover
  makes the program less robust against changes to types: a change to a type
  elsewhere may invalidate many type signatures, even though it does not
  invalidate the associated definitions.

On the other hand, interpreting such definitions using call-by-name when the
programmer expects call-by-need would REALLY introduce a trap for the unwary.

Some suggest that it is enough for compilers to issue a warning when using
call-by-name. I disagree strongly. Such a warning may alert the programmer
at the time the overloaded definition is compiled. But programmers need to
understand programs at other times also. The person reading through the code
of a library, for example, trying to understand why a program using that
library is so slow or uses so much memory, will not be helped by warnings
issued when the library was compiled. The distinction between call-by-need
and call-by-name is vital for understanding programs operationally, and it
should be visible in the source.

So, let's make it visible, in the simplest possible way. Let there be TWO
forms of binding: x = e, and x := e (say). A binding of the form `x = e' is
interpreted using call-by-name, and may of course be overloaded: it makes `x'
and `e' exactly equivalent. A binding of the form `x := e' is interpreted
using call-by-need, and is monomorphic; `x' behaves as though it were
lambda-bound. Now, for example,

pi = 4*asin 1

is an overloaded definition which (visibly) risks duplicated computation,
while

pi := 4*asin 1

is a monomorphic definition at a particular instance which (visibly) does not.
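
To see why the `=` form is call-by-name, consider the dictionary
translation; a toy sketch (the record and field names are illustrative
assumptions, not GHC's actual translation):

    -- A miniature Floating dictionary, just enough for the example.
    data FloatingDict a = FloatingDict
      { fdFromInteger :: Integer -> a
      , fdTimes       :: a -> a -> a
      , fdAsin        :: a -> a
      }

    -- `pi = 4*asin 1` elaborates to a function of the dictionary, so
    -- every use at every instance re-enters the body: call-by-name.
    piBody :: FloatingDict a -> a
    piBody d = fdTimes d (fdFromInteger d 4) (fdAsin d (fdFromInteger d 1))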

Advantages of this idea over the existing MR:

* Monomorphism is decoupled from the syntactic form of the definition. There
  is no need to `eta-convert' definitions to get them into a form that the MR
  does not apply to.

* Monomorphism is decoupled from overloading. With this proposal, x := e is
  always a monomorphic definition, whether the type of e is overloaded or not.
  Thus small changes to e cannot suddenly bring the MR into effect, perhaps
  invalidating many uses of x.

* Monomorphism is decoupled from type inference. One may leave the type of 
  a variable to be inferred regardless of whether it is bound by name or by
  need.

Disadvantages:

* Requires another reserved symbol.

* When converting Haskell 1.x to Haskell 2, many := would need to be inserted.
  Failure to do so could make programs much less efficient. An (optional)
  compiler warning could help here.

An alternative design would be to use := to indicate polymorphism/overloading
in pattern bindings, but retain = for polymorphic function bindings. That
would make conversion from Haskell 1 to Haskell 2 simpler (one would simply
replace = by := in pattern bindings with an overloaded type signature), but is
an unattractively inconsistent design.

John Hughes





TOOLS USA '99 - Last Call for Submissions

1999-02-24 Thread Karen Ouellette

[apologies if you receive multiple copies of this announcement]

**
LAST CALL FOR SUBMISSIONS

  TOOLS USA '99
   Technology of Object-Oriented Languages and Systems
   "DELIVERING QUALITY SOFTWARE"

   Santa Barbara, Calif., August 1-5, 1999
  Fess Parker's Double Tree Resort

 http://www.tools.com/usa
**

Program Chair: Donald Firesmith, Storage Technology Corp., USA
Tutorial Chair: Richard Riehle, AdaWorks, USA
Workshop & Panel Chair: Gilda Pour, San Jose State University, USA
Conference Chair: Bertrand Meyer, ISE, USA

PROGRAM COMMITTEE
Nadia Adhami, Countrywide, USA
Vasu Alagar, Concordia University, Canada
Jan Bosch, University of Karlskrona/Ronneby, Sweden
Benjamin Brosgol, Aonix, USA
Alistair Cockburn, Humans and Technology, USA
Derek Coleman, Hewlett-Packard, USA
Raimund K. Ege, Florida International University, USA
Martin Griss, Hewlett-Packard Laboratories, USA
Brian Henderson-Sellers, University of Sydney, Australia
Laura Hill, Sun Microsystems, USA
Eric Jul, University of Copenhagen, Denmark
Stuart Kent, University of Brighton, UK
Reto Kramer, Cambridge Technology Partners, Switzerland
Qiaoyun Li, Sony Electronics Inc., USA
Robert Marcus, General Motors, USA
John McGregor, Software Architects, USA
James C. McKim, Rensselaer at Hartford, USA
Christine Mingins, Monash University, Australia
Michael Philippsen, University of Karlsruhe, Germany
Reinhold Ploesch, Johannes Kepler University, Austria
Bran Selic, ObjecTime Limited, Canada
Frank Tip, IBM T.J. Watson Research Center, USA
Jeffrey Voas, RST Corporation, USA


TECHNICAL PAPERS
===
TOOLS USA '99 is now soliciting papers on all aspects of object-oriented
technology. All submitted papers will be refereed and judged by
the International Program Committee, not only according to standards of
technical quality but also on their usefulness to practitioners and applied
researchers. TOOLS USA '99 will feature a special emphasis on issues
relating to the challenges of ensuring the quality of delivered
applications. Technical papers that report and assess advances and
experiences in this area are expressly sought.

A non-exhaustive list of topics includes:
- Ensuring the quality of delivered applications throughout the life cycle
- OO verification and testing techniques
- Specification and modeling methods and techniques
- Components, frameworks, and reuse
- Distributed and intelligent objects and agents
- Standardization of languages and methods
- Management, migration, and training issues
- Experience reports with OO technology

In the first phase, an abstract of the paper must be submitted by electronic
mail to [EMAIL PROTECTED] no later than February 26, 1999.
Subsequently, the full paper, in the range of 10 to 20 double-spaced
pages (10,000 to 20,000 words), should be submitted electronically
to [EMAIL PROTECTED] or in hard copy, to arrive no
later than March 5, 1999.

The proceedings of the 30th TOOLS Conference will be published by IEEE
Computer Society Press. Final camera-ready versions of accepted papers will
therefore be required to adhere to the IEEE publication format (guidelines
available soon), and will contain no more than 10 pages.

IMPORTANT DATES:
Electronic abstract submission: February 26, 1999
Manuscript (electronic or hard-copy) submission: March 5, 1999
Notification of acceptance: April 12, 1999
Camera-ready papers due at IEEE: May 7, 1999

PROPOSALS SHOULD BE SUBMITTED TO:
Donald Firesmith
TOOLS USA '99 Program Chair
Storage Technology Corporation
2270 South 88th Street
Louisville, Colorado 80028-5210 USA
Phone: +1-303-661-5943
[EMAIL PROTECTED] (for contact only, see above for
electronic submission addresses)


TUTORIALS
===
TOOLS USA '99 is now soliciting proposals for high-quality tutorials. Topics
of high potential interest in the OO field, and not yet covered in other
conferences, are particularly sought for presentation at TOOLS USA '99.
Tutorials should be innovative, with a strong practical content, or based on
significant industrial developments, and be of interest to a significant
part of the software community. Tutorials are normally one half-day (three
and a half hours including one break).

Tutorial presenters are entitled to submit an article of a related topic (up
to 10 pages), for inclusion in the TOOLS 30 Conference Proceedings,
published by the IEEE Computer Society (upon Chair's approval). Please note
that you must prepare tutorial notes for the participants.

IMPORTANT DATES:
Tutorial submission deadline: February 26, 1999
Notification of acceptance: March 22, 1999
Camera-ready article: May 7, 1999
Camera-ready notes: June 25, 1999

PROPOSALS SHOULD BE SUBMITTED TO:
Richard Riehle
TOOLS USA '99 Tutorial Chair
AdaWorks, USA
[EMAIL PROTECTED]


WORKSHOPS

The purpose of a

Re: Modifying the monomorphism restriction

1999-02-24 Thread S.M.Kahrs

I just wanted to mention that John's idea of two different forms of
binding, a polymorphic one with repeated evaluation and a monomorphic one
with single evaluation, is not new.  It is also in Xavier Leroy's PhD
thesis "Polymorphic Typing of an Algorithmic Language", where he suggests
two different let's.

The context there was an ML dialect with references.
So, strict evaluation + side-effects.  In this system
polymorphic references were possible, because the polymorphism
required a re-evaluation of the reference.

Another way to look at it is by making type abstractions
explicit and thinking of a type abstraction as a specific form of closure.
At every type application we evaluate the body of this closure.
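
That analogy can be played out at the value level; a sketch in Haskell,
with a unit argument standing in for the type argument:

    import Debug.Trace (trace)

    -- Model the type abstraction /\a. e as a lambda over a dummy
    -- argument: each "type application" re-evaluates the body.
    poly :: () -> [Int]       -- stands for a value of type: forall a. [a]
    poly () = trace "body evaluated" []

    uses :: ([Int], [Int])
    uses = (poly (), poly ())   -- two applications, two evaluations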

Stefan Kahrs






Re: Modifying the monomorphism restriction

1999-02-24 Thread Joe English


John Hughes <[EMAIL PROTECTED]> wrote:
>
> Everybody agrees the monomorphism restriction is a pain:
> [...]
> So, let's make it visible, in the simplest possible way. Let there be TWO
> forms of binding: x = e, and x := e (say). A binding of the form `x = e' is
> interpreted using call-by-name, and may of course be overloaded: it makes `x'
> and `e' exactly equivalent. A binding of the form `x := e' is interpreted
> using call-by-need, and is monomorphic; `x' behaves as though it were
> lambda-bound.
>

This is a good idea, except for the use of ':='.
I'd hate to lose that symbol from the programmers' namespace
just to solve the MR problem.  (Am I the only one who's
never been bitten by the MR restriction?)

How about leaving the 'a = b' binding form as it is,
(monomorphism restriction and all) and using 'a = ~ b'
to indicate call-by-name.  '~' is already a reserved
symbol, and it can't currently appear on the RHS of
a binding, so this new syntax wouldn't break any existing
programs.

This way call-by-need remains the default, and compilers
will still signal an error if the programmer accidentally
bumps into the MR.  If this happens you only need to
twiddle the code to fix it.  For people reading the code,
a ~ on the RHS of a binding is a signal that something
out-of-the-ordinary is going on operationally, the same as
when it appears on the LHS.


--Joe English

  [EMAIL PROTECTED]