Re: Records in Haskell

2012-01-11 Thread Ingo Wechsung
On 11 January 2012 at 08:42, Isaac Dupree
m...@isaac.cedarswampstudios.org wrote:

 On 01/10/2012 05:06 AM, Greg Weber wrote:

 Some of your comments seem to not fully recognize the name-spacing (plus
 simple type resolution) aspect of this proposal that I probably didn't
 explain well enough. Or maybe I don't understand your comments.

 For record.field, field is under the record's namespace. A namespace (from
 a module, or under the new system a record) cannot export conflicting
 names, and therefore this system prevents the importer from ever having a
 conflict with a record field name, because the field is still under the
 record's namespace when imported. The type system must resolve the type of
 the record, and generally the field cannot contribute to this effort.


 (I have only used Haskell for several years, not implemented Haskell
 several times; apologies for my amateurish understanding of the type
 system.)

 So
 Type inference proceeds assuming that record.field is something
 equivalent to "undefined record" (using undefined as a function type),
 and the program is only correct if the type of record resolves to a
 concrete type? I don't know if "concrete type" is at all the right
 terminology; I mean a type variable doesn't count (whether
 class-constrained, Num a => a, or not, a; even m Int is not
 concrete).  Is forall a. Maybe a okay (if Maybe were a record)? forall
 a. Num a => Maybe a?  I'm guessing yes.


Exactly. More specifically: the type must be of the form T a1 ... an, where T
is a type constructor.
The a_i are not needed for field selection, but of course *if* a field f is
found in namespace T and the construct was r.f, then the type checker is
going to check (T.f r); hence the type of r must fit the first argument of
T.f in the usual way. The type of T.f itself is of course already known
(just like that of any other function the currently typechecked function
depends on).
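
To make that concrete, here is a minimal Haskell sketch of the elaboration
(plain Haskell, not Frege; the names T, MkT, f and example are invented for
illustration): once the head type constructor of r's type is known to be T,
r.f is checked as the ordinary application T.f r.

data T a = MkT { f :: a }        -- T introduces the field f in its namespace

example :: T Int -> Int
example r = f r                  -- what r.f would elaborate to: the head
                                 -- constructor of r's type is T, so f is looked
                                 -- up in T's namespace and checked as (T.f r)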



 Does this order of stages (regular scope selection, then type inference,
 then record scope) make as high a fraction of code work as Frege's
 left-to-right model (which I am guessing interleaves type inference and
 record scope selection as it proceeds left-to-right through the program)?


I think that the way it is done in the current Frege compiler (note that
the language does not prescribe any particular order or way of
typechecking) is the one with the worst percentage of hits, because it is
the simplest approach.

-- 

Ingo


Re: Records in Haskell

2012-01-08 Thread Ingo Wechsung
2012/1/8 Gábor Lehel illiss...@gmail.com


 The second is that only the author of the datatype could put functions
 into its namespace; the 'data.foo' notation would only be available
 for functions written by the datatype's author, while for every other
 function you would have to use 'foo data'. I dislike this special
 treatment in OO languages and I dislike it here.


Please allow me to clarify as far as Frege is concerned.
In Frege, this is not so, because implementations of class functions in an
instance will be linked back to the instantiated type's namespace. Hence
one could do the following:

module RExtension where

import original.M(R)    -- access the R record defined in module original.M

class Rextension1 r where
  firstNewFunction :: ...
  secondNewFunction :: ...

instance Rextension1 R where
 -- implementation for new functions

And now, in another module one could

import RExtension()  -- equivalent to qualified import in Haskell

and, voilà, the new functions are accessible (only) through R.
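
For readers who want to see the effect spelled out, here is a rough
plain-Haskell analogue of the mechanism (all names and signatures are
invented for illustration; Haskell has no per-type namespaces, so the
method is simply used unqualified, whereas Frege would also let you write
r.firstNewFunction and resolve it through R):

module RExtensionDemo where

-- stand-in for the R record imported from original.M
data R = R { label :: String }

-- the extension class from above, with concrete (assumed) signatures
class Rextension1 r where
  firstNewFunction  :: r -> String
  secondNewFunction :: r -> Int

instance Rextension1 R where
  firstNewFunction  = label
  secondNewFunction = length . label

useIt :: R -> String
useIt r = firstNewFunction r       -- in Frege: r.firstNewFunction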


-- 
Kind regards,
Ingo


Re: Records in Haskell

2012-01-08 Thread Ingo Wechsung
2012/1/8 Gábor Lehel illiss...@gmail.com

 2012/1/8 Ingo Wechsung ingo.wechs...@googlemail.com:
 
 
  2012/1/8 Gábor Lehel illiss...@gmail.com
 
 
  The second is that only the author of the datatype could put functions
  into its namespace; the 'data.foo' notation would only be available
  for functions written by the datatype's author, while for every other
  function you would have to use 'foo data'. I dislike this special
  treatment in OO languages and I dislike it here.
 
 
  Please allow me to clarify as far as Frege is concerned.
  In Frege, this is not so, because implementations of class functions in an
  instance will be linked back to the instantiated type's namespace. Hence
  one could do the following:
 
  module RExtension where
 
  import original.M(R)    -- access the R record defined in module original.M
 
  class Rextension1 r where
   firstNewFunction :: ...
   secondNewFunction :: ...
 
  instance Rextension1 R where
   -- implementation for new functions
 
  And now, in another module one could
 
  import RExtension()  -- equivalent to qualified import in Haskell
 
  and, voilà, the new functions are accessible (only) through R.

 Ah, I see. And that answers my other question as well about why you
 would special case class methods like this. Thanks. I think I prefer
 Disciple's approach of introducing a new keyword alongside 'class' to
 distinguish 'virtual record fields' (which get put in the namespace)
 from any old class methods (which don't). Otherwise the two ideas seem
 very similar. (While at the same time I still dislike the
 wrong-direction aspect of both.)


Yes, I can see your point here. OTOH, with the x.y.z notation the number of
parentheses needed can be reduced drastically at times.
In the end it is perhaps a matter of taste. Frege started as a pure hobby
project (inspired by Simon Peyton Jones's paper on practical type inference
for higher-ranked types), but later I thought it might be interesting for OO
programmers (especially Java programmers) because of the low entry cost (just
download a JAR, stay on the JVM, etc.), and hence some aspects are designed to
make them feel at home. Ironically, it turned out that most of the interest is
in the FP camp, while feedback from the Java camp is almost zero. Never mind!

-- 
Kind regards,
Ingo


ext-core Questions

2002-12-20 Thread Ingo Wechsung
Dear GH Users,

I have been using the -fext-core option to generate *.hcr files. I have also
read the document "An External Representation for the Core Language".

There are still some things that confuse me.

First, it seems that tuples such as (a,b) are sometimes represented as
DataziTuple.Z2T (a) (b) and in other cases as GHCziPrim.Z2H (a) (b). Why is
this?

Second, in GHC-produced Core programs one frequently sees references to
intermediate values from other modules, such as SystemziIO.lvl (print
newline?), GHCziNum.lvl1 (which seems to be an Integer constant), or even
GHCziNum.a4 (which seems to be () :: Integer -> Integer -> Bool). However,
the types of these names, as well as of any other names from imported
modules, are not given. How, then, is it possible to type check a Core
program?

(For those who are interested in the background of my question: I wondered
whether it would be possible to translate Core to Perl. Let's say we translate
all top-level values to Perl subs. Then I need to know the arity of each
top-level value in order to distinguish non-saturated function applications.
For example, if we have

sub M::foo($$$) { my ($a1, $a2, $a3) = @_; ... }

then, if the Core code is something like foo a b, the corresponding Perl code
could be something like

sub { my $arg3 = shift; M::foo($a, $b, $arg3); }

yielding a reference to an anonymous function that, when applied to another
argument, calls foo.
However, this can only be done when the signature of foo is known, which is
not the case if foo is imported.)
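
As a hedged Haskell illustration of the arity problem (names invented):
whether an application is saturated depends on the callee's arity, which is
exactly the information that is missing for imported names in the printed
Core.

foo :: Int -> Int -> Int -> Int    -- arity 3
foo a b c = a + b + c

saturated :: Int
saturated = foo 1 2 3              -- all three arguments present: a direct call

partial :: Int -> Int
partial = foo 1 2                  -- under-saturated: a closure must be built
                                   -- that waits for the remaining argument;
                                   -- recognizing this requires knowing foo's arity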

Merry Christmas
Ingo