Re: String != [Char]

2012-03-24 Thread Isaac Dupree

On 03/24/2012 02:50 PM, Johan Tibell wrote:

[...]
Furthermore, the memory overhead of Text is smaller, which means that
applications that hold on to many string values will use less heap and
thus experience smaller "freezes" due to major GC collections, which are
linear in the heap size.


How is Text for small strings currently (e.g. one English word, if not 
one character)?  Can we reasonably recommend it for that?

This recent question suggests it's still not great:
http://stackoverflow.com/questions/9398572/memory-efficient-strings-in-haskell



Re: Proposal: require spaces around the dot operator

2012-02-11 Thread Isaac Dupree

On 02/11/2012 09:21 PM, Roman Leshchinskiy wrote:

On 12/02/2012, at 02:04, Greg Weber wrote:


I am sorry that I made the huge mistake in referencing future possible
proposals. If this proposal passes, that has no bearing on whether the
other proposals would pass, it just makes them possible.

Please help me fix my error by stopping all discussions of future
proposals and focusing solely on the one at hand.


But if we don't consider those future proposals, then what is the justification 
for this one? It does break existing code so there must be some fairly 
compelling arguments for it. I don't think it can be considered in isolation.


Does it help your concern about breaking existing code to make sure this 
proposal has a LANGUAGE flag? ("-XDotSpaces" or such)


(I'm guessing that helps somewhat but not very satisfactorily; the more 
default and standard it becomes, the more often it tends to break code 
anyway.)


-Isaac



Re: Proposal: require spaces around the dot operator

2012-02-10 Thread Isaac Dupree

On 02/10/2012 06:09 AM, Gábor Lehel wrote:

On Fri, Feb 10, 2012 at 4:42 AM, Isaac Dupree wrote:

I support requiring spaces around the dot operator, *even if* we don't ever
end up using it for anything else.


+1.

I would support requiring spaces around _all_ operators. I can't
immediately think of any operator where it would be detrimental, at
least, albeit my memory is not the greatest.


FWIW, it's pretty common to omit spaces around the arithmetic operators 
+, -, *.  This was common enough to derail the idea of lexing negative 
integer literals as ( "-", no space, numeric literal ). 
So you'd have to fight that battle.


I think spaces around all operators sounds nice in a new language; I'm 
not sure about Haskell.


-Isaac



Re: Proposal: require spaces around the dot operator

2012-02-09 Thread Isaac Dupree
I support requiring spaces around the dot operator, *even if* we don't 
ever end up using it for anything else.


It helps a bit in mentally parsing code, so I try to write that way 
anyway.  So I don't mind making this change.


This change also helps the community: it is one less issue to concurrently 
agonize over while talking about records (whether or not we decide to use 
the dot there, it makes that conversation less complicated).


-Isaac



Re: In opposition of Functor as super-class of Monad

2011-01-05 Thread Isaac Dupree
Tony, you're missing the point... Alexey isn't making a complete patch 
to GHC/base libraries, just a hacky-looking demonstration.  Alexey is 
saying that in a class hierarchy (such as if Functor => Monad were a 
hierarchy, or for that matter "XFunctor"=>"XMonad" or Eq => Ord), it is 
still possible to define the superclass functions (fmap) in terms of the 
subclass functions (return and >>=) (such as writing a functor instance 
in which "fmap f m = m >>= (return . f)").  This has always been true in 
Haskell, it just might not have been obvious.
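
As a minimal sketch of that point (my example, not Alexey's patch): the 
superclass operation can be written as an ordinary function over the 
subclass's methods.

-- fmap recovered from Monad's return and (>>=); this is just
-- Control.Monad.liftM.
fmapFromMonad :: Monad m => (a -> b) -> m a -> m b
fmapFromMonad f m = m >>= (return . f)

-- The same shape for the Eq => Ord analogy: (==) recovered from compare.
eqFromOrd :: Ord a => a -> a -> Bool
eqFromOrd x y = compare x y == EQ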




Re: Functor hierarchy proposal and class system extension proposal

2011-01-04 Thread Isaac Dupree

On 01/04/11 19:48, Ben Millwood wrote:

There's a fair question in whether we want deviation from the default
at all (although I think the answer is probably yes). I think it's
reasonable that any type that is an instance of Monad be forced to
have ap = (<*>), for example, so really the only reason I can see we'd
want to be able to override those functions would be for efficiency.


Remember the example
Monad implies Functor (fmap = Control.Monad.liftM)
Traversable implies Functor (fmap = Data.Traversable.fmapDefault)

e.g. [] and Maybe are instances of all these classes.

yes, liftM and fmapDefault probably must *do* the same thing[*], but one 
of those definitions still needs to be picked.


[*probably -- I haven't convinced myself that it's true in all cases of 
"deepening"-type class hierarchies, though -- and we are here trying to 
engineer support for all cases of "deepening" hierarchies.]
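
As a concrete illustration of the choice (a sketch, not from the thread): 
for Maybe, both candidate defaults exist and agree, but an instance still 
has to commit to one of them.

import Control.Monad (liftM)
import Data.Traversable (fmapDefault)

-- Two lawful candidates for Maybe's fmap; they agree here, but only one
-- can be the definition used in the Functor instance.
fmapViaMonad :: (a -> b) -> Maybe a -> Maybe b
fmapViaMonad = liftM

fmapViaTraversable :: (a -> b) -> Maybe a -> Maybe b
fmapViaTraversable = fmapDefault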


-Isaac



Re: ExplicitForAll complete

2010-11-22 Thread Isaac Dupree

On 11/22/10 06:41, Ian Lynagh wrote:

On Sun, Nov 21, 2010 at 06:25:38PM -0800, Iavor Diatchki wrote:

* It seems that allowing "superfluous" variables in "foralls" could be
useful for some future extensions.  For example, if we had scoped type
variables and explicit type application, then it may make sense to
have quantified variables that do not appear
in the rest of the type (but do appear in the definition of the
function).  I guess we could revise things again if that was ever to
happen, but still, it seems to me that this might be more appropriate
as an "unused variable" warning, rather than an error?


"Eq a =>  Int" isn't a valid type, so I don't think "forall a . Int"
should be either. As you say, it's possible that future extensions will
generalise this.


In functions with typeclass overloading, an instance must be picked in 
order to call the function, even if this means resorting to defaulting.


In functions with parametric polymorphism (no (context)=>), it never 
needs to be decided.  For example, "exitFailure :: IO a" can be called 
on a line where the return value is never used (besides being unified 
with (>>=) types and stuff); it can remain "a".


So I don't think that analogy works for me.  Still not sure whether we 
should allow "forall a . Int" or not (no strong opinions either way).  I 
think it would compile and type-infer fine (although GHC experts may 
correct me... and/or people familiar with other compiler implementations).



*  Is there any case where an empty "forall" is useful, and if not,
why allow it?  I guess it is a way to make it explicit that a value is
monomorphic but i think that types like "forall. Int" look odd.


I don't mind either way.


It looks odd, but it would be annoying (to tools and otherwise) to 
exclude it from being allowed, even if it's not used much.


P.S. Regarding capitalization -- ExplicitForAll vs. ExplicitForall -- IMHO 
let's stick to one.  The extension is written ExplicitForall.


-Isaac


Re: [Haskell-cafe] Re: Proposal to solve Haskell's MPTC dilemma

2010-05-29 Thread Isaac Dupree

On 05/29/10 21:24, Carlos Camarao wrote:

The situation is as if we had an FD:


Well, that is indeed equivalent here in the second argument of class F, 
but I constructed the example to show an issue in the class's *first* 
argument.


Notice that you needed to add type signatures on the functions you named "g" 
-- in particular on their first arguments -- to make the example work with 
only FDs?



module C where
   class F a b | a -> b where f :: a -> b
   class O a where o :: a

module P where
   import C
   instance F Bool Bool where f = not
   instance O Bool where o = True
   g :: Bool -> Bool
   g = f
   k :: Bool
   k = g o

module Q where
   import C
   instance F Int Bool where f = even
   instance O Int where o = 0
   g :: Int -> Bool
   g = f
   k :: Bool
   k = g o

you can inline these "k"-definitions into module Main and it will work 
(modulo importing C).


module Main where
import C
import P
import Q
main = do { print (((f :: Bool -> Bool) o) :: Bool);
            print (((f :: Int -> Bool) o) :: Bool) }

These are two different expressions that are being printed, because
" :: Bool -> Bool" is different from " :: Int -> Bool".  In my example 
of using your proposal, one cannot inline in the same way, if I 
understand correctly (the inlining would cause ambiguity errors -- 
unless of course the above distinct type-signatures are added).


If your proposal were able to require those -- and only those -- bits of 
type signatures that are essential to resolve the above ambiguity (for 
example, the (:: Int) below),

module Q where
   import C
   instance F Int Bool where f = even
   instance O Int where o = 0
   k = f (o :: Int)
then I would be fine with your proposal (but then I suspect it would 
have to be equivalent to FDs -- or, in other words, that it's not really 
practical to change your proposal to have that effect).


I stand by my assertion that "the same expression means different things 
in two different modules" is undesirable (and I suspect, but am not sure, 
that this undesirability is what's named "incoherent instances").
I'm trying to work out whether it's possible to violate the invariants 
of a Map by using your extension (having it select a different instance 
in two different places, given the same type).  I think it is not 
possible for Ord or any single-parameter typeclass.  There might be some 
kind of issue with multi-parameter typeclasses, though: if the library 
relies on an FD-style relationship between two class type-parameters, and 
then two someones each add an instance that together violate that 
implied FD-relationship (which is allowed under your scheme, unlike with 
an actual FD).  Er, odd -- I need to play with some actual FD 
code to think about this, but I'm too sleepy / busy packing for a trip.


Did any of the above make sense to you?  It's fine if some didn't, type 
systems are complicated... and please point out if something I said was 
outright wrong.



-Isaac


Re: Proposal to solve Haskell's MPTC dilemma

2010-05-27 Thread Isaac Dupree

On 05/27/10 17:42, Carlos Camarao wrote:

On Thu, May 27, 2010 at 5:43 PM, David Menendez  wrote:


On Thu, May 27, 2010 at 10:39 AM, Carlos Camarao wrote:

Isaac Dupree:

Your proposal appears to allow /incoherent/ instance selection.
This means that an expression can be well-typed in one module, and
well-typed in another module, but have different semantics in the
two modules.  For example (drawing from above discussion) :

module C where

class F a b where f :: a ->  b
class O a where o :: a

module P where
import C

instance F Bool Bool where f = not
instance O Bool where o = True
k :: Bool
k = f o

module Q where
import C
instance F Int Bool where f = even
instance O Int where o = 0
k :: Bool
k = f o

module Main where
import P
import Q
-- (here, all four instances are in scope)
main = do { print P.k ; print Q.k }
-- should result, according to your proposal, in
-- False
-- True
-- , am I correct?


If qualified importation of k from both P and from Q was specified, we
would have two *distinct* terms, P.k and Q.k.


I think Isaac's point is that P.k and Q.k have the same definition (f
o). If they don't produce the same value, then referential
transparency is lost.

--
Dave Menendez
<http://www.eyrie.org/~zednenem/>



The definitions of P.k and Q.k are textually the same but the contexts are
different. "f" and "o" denote distinct values in P and Q. Thus, P.k and Q.k
don't have the same definition.


Oh, I guess you are correct: it is like defaulting.  It has a similar 
effect where the same expression means different things in two different 
modules, as if you had default (Int) in one and default (Bool) in the 
other.  Except: defaulting according to the standard only works in 
combination with the 8 (or however many it is) standard classes; and 
defaulting in Haskell is already a bit poorly designed / frowned upon / 
annoying in that it's specified per-module when nothing else in the 
language is*.  (That's a rather surmountable argument.)


It may be worth reading the GHC user's guide, which attempts to explain 
the difference between incoherent and non-incoherent instance selection:

http://www.haskell.org/ghc/docs/6.12.2/html/users_guide/type-class-extensions.html#instance-overlap
I haven't read both it and your paper closely enough to be sure 
whether GHC devs would think your extension would require or 
imply -XIncoherentInstances ... my intuition was that 
IncoherentInstances would be implied...


*(it's nice when you can substitute any use of a variable, such as P.k, 
with the expression that it is defined as -- i.e. the expression written 
so that it refers to the same identifiers, not a purely textual 
substitution -- but in main above, you can't write [assuming you 
imported C] "print (f o)" because it will be rejected for ambiguity. 
(Now, there is already an instance-related situation like this where 
Main imports two different modules that define instances that overlap in 
an incompatible way, such as two different instances for Functor (Either 
e) -- not everyone is happy about how GHC handles this, but at least 
those overlaps are totally useless and could perhaps legitimately result 
in a compile error if they're even imported into the same module.))



Re: Proposal to solve Haskell's MPTC dilemma

2010-05-26 Thread Isaac Dupree

On 05/26/10 15:42, Carlos Camarao wrote:

What do you think?


I think you are proposing to use the current set of instances in scope in 
order to remove ambiguity.  Am I right?  ...I've read the haskell-cafe 
thread so far, and it looks like I'm right.  This is what I'll add to 
what's been said so far:


Your proposal appears to allow /incoherent/ instance selection.  This 
means that an expression can be well-typed in one module, and well-typed 
in another module, but have different semantics in the two modules.  For 
example (drawing from above discussion) :


module C where
class F a b where f :: a -> b
class O a where o :: a

module P where
import C
instance F Bool Bool where f = not
instance O Bool where o = True
k :: Bool
k = f o

module Q where
import C
instance F Int Bool where f = even
instance O Int where o = 0
k :: Bool
k = f o

module Main where
import P
import Q
-- (here, all four instances are in scope)
main = do { print P.k ; print Q.k }
-- should result, according to your proposal, in
-- False
-- True
-- , am I correct?

Also, in your paper, example 2 includes

m = (m1 * m2) * m3

and you state

In Example 2, there is no means of specializing type variable c0 occurring in the type of m to Matrix.


I suggest that there is an appropriate such means, namely, to write:
m = (m1 * m2 :: Matrix) * m3
(Could the paper address how that solution falls short?  Are there 
other cases in which there is more than just a little syntactic 
convenience at stake, or is even that much added code too much for some 
use-case?)


-Isaac


Re: PROPOSAL: Include record puns in Haskell 2011

2010-02-24 Thread Isaac Dupree

On 02/24/10 13:40, Martijn van Steenbergen wrote:

Ian Lynagh wrote:

I have a feeling I'm in the minority, but I find record punning an ugly
feature.

Given
data T = C { f :: Int }
we implicitly get
f :: T -> Int
which punning shadows with
f :: Int
whereas I generally avoid shadowing completely.


I agree with Ian.


I tend to agree.

I don't mind if a few files that use a ton of label-operations are 
marked with NamedFieldPuns and use that feature a lot.  But, funnily, if 
it were put in the standard then it would be enabled in un-marked source 
files, and then I personally wouldn't like it as much. (Any decision is 
acceptable to me though.)
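
For reference, a small sketch of what the pun looks like, using Ian's type 
from above:

{-# LANGUAGE NamedFieldPuns #-}
data T = C { f :: Int }

-- In the pattern C{f}, the pun binds a local f :: Int that shadows the
-- selector f :: T -> Int within this equation.
bump :: T -> Int
bump C{f} = f + 1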


-Isaac


Re: Unsafe hGetContents

2009-10-11 Thread Isaac Dupree
Hmm, don't you think forkIO deserves some of the same complaints as 
unsafeInterleaveIO?  Things happen in a nondeterministic order!


I think what irritates us about unsafeInterleaveIO is that it's IO that 
tinkers with the internals of the Haskell evaluation system.  The OS 
can't do it: in a C program it might, because there's libc and debuggers 
and all kinds of things that understand compiled C to some extent.  But 
the Haskell runtime system is pretty much obfuscated to anyone except 
ourselves.  This obfuscation maintains its conceptual purity to a 
greater extent than is really guaranteed by the standards.  This 
obfuscation is supported in our minds by the fact that functions (->) 
cannot be compared for equality or deconstructed or serialized in any 
way, only applied.


forkIO causes IO to happen in a nondeterministic order.  So does 
unsafeInterleaveIO.  But for unsafeInterleaveIO, the nondeterminism 
depends in part on how pure functions are written: partly because there 
is a compiler that makes arbitrary choices, and also partly affected by 
the strictness properties of the functions.  This feels disconcerting to 
us.  And worse: I am not sure if forkIO has a formal guarantee that its 
IO will complete, but we tend to assume that it will, sooner or later; 
unsafeInterleaveIO might not happen at all, and frequently does not, due 
to the observations of how pure functions are written.
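
A small illustration of that last point (my sketch, not from the thread): 
whether and when the interleaved action runs depends entirely on whether 
the pure consumer forces its result.

import System.IO.Unsafe (unsafeInterleaveIO)

main :: IO ()
main = do
  -- the inner putStrLn runs only if and when s is forced
  s <- unsafeInterleaveIO (putStrLn "performing the IO now" >> return "payload")
  putStrLn "before forcing"
  putStrLn s  -- forcing s here is what triggers the interleaved action
  -- if the final putStrLn s were removed, the interleaved IO would never run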


It's disconcerting.  It can affect how we choose to write our pure code, 
the same way that efficiency and memory concerns can.  But if 'catch' 
can catch a different exception depending even, conceptually, on the 
phase of the moon, it is a similarly strange stretch to imagine 
unsafeInterleaveIO doing so.  It plays with chronology (like forkIO 
does) and with the ways Haskell functions are written (like 'catch' 
does) at the same time.


A result is that it makes a lot of people confused when they do 
something they didn't intend with it.  Also, it's a powerful enough tool 
that when you want to replace its formal nondeterminism with something 
more precise, you may have quite a bit of work cut out for you, 
restructuring your code (like Darcs did, IIRC).


-Isaac



Re: Standarize GHC.Prim.Any

2009-09-05 Thread Isaac Dupree

Maciej Piechotka wrote:

On Thu, 2009-08-20 at 19:59 +0200, Maciej Piechotka wrote:

Sorry for delay in responding.


Any from GHC.Prim makes unsafeCoerce much more useful and safe. Can it be
included in the Unsafe.Coerce module?

Pros:
- unsafeCoerce is much more useful with Any (it's a safe placeholder for
any value and therefore can be passed simply in/out FFI).

Yes it's a safe placeholder, but so is an existential, I believe...
data ContainsAny = forall a. ContainsAny a
to put in the container:
"ContainsAny x"
to remove from the container:
"case ... of ContainsAny x -> unsafeCoerce x"
Albeit, there might be a slight performance penalty for the extra box, 
which can't be removed in all haskell implementations.
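
A self-contained sketch of that existential-box idea (my names, nothing 
GHC-specific beyond the extension):

{-# LANGUAGE ExistentialQuantification #-}
import Unsafe.Coerce (unsafeCoerce)

data ContainsAny = forall a. ContainsAny a

-- Only sound if the caller asks for exactly the type that was put in;
-- that is the same obligation that unsafeCoerce on Any carries.
takeOut :: ContainsAny -> b
takeOut (ContainsAny x) = unsafeCoerce x

main :: IO ()
main = putStrLn (takeOut (ContainsAny "hello"))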


Also, what do you mean about "FFI"?  I don't think you can pass the 
"Any" type to foreign functions through the FFI...


Also, can/do all compilers that implement unsafeCoerce implement a safe Any?
Hugs can do it with just "data Any = Ignored" I believe, not sure about 
nhc, yhc or jhc...


-Isaac

Well. May be I have one specific problem which seems to not be possible
to be implemented in portable way (I'm not sure if it is possible
outside ghc). Sorry for mentioning FFI without further explanation.

The problem arise if one would like to implement GClosure from glib.
Once it was implemented in gtk2hs in C using RTS API. I also have my own
implementation in Haskell which is doing something (removed IO monad for
clarity):
applyValue :: Any -> Value -> Any
applyValue f v =
    case fundamentalTypeOf v of
        Type1 -> (unsafeCoerce f :: HaskellType1 -> Any) $ get v
        ...

Such a trick is (or looks to me?) safe, as a -> b -> ... -> d can be
represented by Any. However, I cannot safely cast to a function a -> Any.

To/from FFI it is passed in Ptr (StablePtr Any).

Regards
PS. I assume that it is not possible as it was done in importable was in
gtk2hs. 


With any known from the beginning number of parameters function and GADT
one can write:

data Closure where
Closure0 :: IO a
Closure1 :: a -> IO b
Closure2 :: a -> b -> IO c
-- ...


That isn't GADT syntax... results of "Constructor :: ..." must be 
"Closure" in this case.  What are you trying to do here?  An alternative 
for "Ptr (StablePtr Any)"?  Is "Ptr (StablePtr Dynamic)" sufficient, 
where Dynamic from Data.Dynamic is represented in an 
implementation-specific way equivalent to

data Dynamic = forall a. Typeable a => DynamicConstr a
a.k.a.
data Dynamic where DynamicConstr :: Typeable a => a -> Dynamic
?
The main limitation would be if you want to include a type that's not in 
Typeable.  Is that common, to have types that aren't in Typeable?
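
A sketch of the Dynamic-based alternative (illustrative names; the 
monomorphic Int -> IO () callback type is just an assumption for the example):

import Data.Dynamic (Dynamic, toDyn, fromDynamic)

-- Store a callback of a known monomorphic type...
wrapCallback :: (Int -> IO ()) -> Dynamic
wrapCallback = toDyn

-- ...and recover it later; fromDynamic returns Nothing on a type mismatch,
-- which is exactly the safety check that a bare unsafeCoerce lacks.
invokeCallback :: Dynamic -> Int -> IO ()
invokeCallback d n = case fromDynamic d of
  Just f  -> f n
  Nothing -> putStrLn "stored value had a different type"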


It would be more annoying but you could add extra concrete-type 
constructors to a data, for every possibility, like

  DynamicConstr3 :: (Int, Bool) -> MyDynamic
if for some reason (Int, Bool) weren't in Typeable... Would be silly to 
do, however.  Just put things in Typeable?  Or, if you know it'll be 
type-correct, just use ContainsAny as above, with "Ptr (StablePtr 
ContainsAny)"?


It might not be absolutely the most optimized, but portable standard 
mechanisms were never the place for that kind of hacking.  And it'll be 
good enough.  Right?


-Isaac


Re: Standarize GHC.Prim.Any

2009-08-17 Thread Isaac Dupree

Maciej Piechotka wrote:

I'm so sorry if I've done it wrong - it's my first contact with Haskell
standardization and I am basing this on
http://hackage.haskell.org/trac/haskell-prime/wiki/Process . I hope it
is not too late as well.


Do not worry about that, we have decided to have a yearly 
standardization process.



Any from GHC.Prim makes unsafeCoerce much more useful and safe. Can it be
included in the Unsafe.Coerce module?

Pros:
- unsafeCoerce is much more useful with Any (it's a safe placeholder for
any value and therefore can be passed simply in/out FFI).


Yes it's a safe placeholder, but so is an existential, I believe...
data ContainsAny = forall a. ContainsAny a
to put in the container:
"ContainsAny x"
to remove from the container:
"case ... of ContainsAny x -> unsafeCoerce x"
Albeit, there might be a slight performance penalty for the extra box, 
which can't be removed in all haskell implementations.


Also, what do you mean about "FFI"?  I don't think you can pass the 
"Any" type to foreign functions through the FFI...


Also, can/do all compilers that implement unsafeCoerce implement a safe Any?
Hugs can do it with just "data Any = Ignored" I believe, not sure about 
nhc, yhc or jhc...


-Isaac


Re: Proposals and owners

2009-08-08 Thread Isaac Dupree

Ross Paterson wrote:

On Sat, Aug 08, 2009 at 10:09:38AM +0100, Iavor Diatchki wrote:

I thought that the intended semantics was supposed to be that the only
element is bottom (hence the proposal to add a related empty case
construct)?


If that were the case, a compiler could legitimately discard any value
of such a type, because it could be easily reconstructed.  I don't
think that is what is intended.


Actually, I think it is.  I think that's a natural consequence of the 
way Haskell is specified.  GHC tries to pick the kind of bottom that you 
expected, but it doesn't always work really well, because it's not 
actually specified in any sort of formal way...


Now, with imprecise exceptions, I'm not sure a compiler could 
legitimately discard the value.


(by the way, for a type that you can unsafeCoerce anything to, GHC has a 
special type named "Any". Which is not the same as a data type with no 
constructors.)


-Isaac


Re: NoMonomorphismRestriction

2009-08-06 Thread Isaac Dupree

The paper makes the (somewhat radical) case for not generalising local bindings 
at all; which would at a stroke remove most of the issues of the MR.  (We'd 
still need to think about the top level.)

We'd love to know what any of you think of the idea.


I read the paper (except section 5 which is very technical).

I like that it makes
(let x = ... in ...)
behave the same as
(\x -> ...) (...)
. Understanding how to respond to type inference and error messages is 
hard enough without having additional differences in innocent-looking 
code.  Do you think my hope is reasonable that not-generalizing could 
lead to better error messages?  I don't quite understand the issues[*], 
but I suspect that not-generalizing would at least make *me* less 
confused when fixing error messages because there are fewer different 
typechecker behaviors to think about.  I guess it's still possible to 
use explicit type-signatures to make let-bindings polymorphic, in a way 
that is difficult or impossible for lambda or case? (I guess for lambda, 
it would require making the lambda into a rank-2 function, though I'm 
not sure how to do that syntactically.)
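
A small sketch of the difference in question (my example, not from the 
paper; the lambda version assumes GHC's RankNTypes and ScopedTypeVariables):

{-# LANGUAGE RankNTypes, ScopedTypeVariables #-}

-- With generalization of local bindings, f gets forall a. a -> a and can
-- be used at two different types:
withLet :: (Char, Bool)
withLet = let f = id in (f 'a', f True)

-- The plain lambda version is rejected, because a lambda-bound f is
-- monomorphic:
--   (\f -> (f 'a', f True)) id        -- type error
-- Giving the binder a rank-2 annotation recovers it:
withLambda :: (Char, Bool)
withLambda = (\(f :: forall a. a -> a) -> (f 'a', f True)) id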


[*] e.g., the gmapT / rank-2 example confuses me; would it work if 
(...blah...) were passed directly to gmapT without the let?


Also, does it happen to solve the 2^n worst-case typechecking?

-Isaac


Re: bug in language definition (strictness)

2009-08-06 Thread Isaac Dupree

Simon Marlow wrote:
I'm against the change originally proposed in this thread, because on 
its own it doesn't make any difference.


I wonder whether, in practice, if we modified the definition of $! in the GHC 
libraries, GHC would compile things differently enough that we 
would notice the difference.  Probably not often enough to really notice...


-Isaac


Re: Proposal: FirstClassFieldUpdates

2009-07-28 Thread Isaac Dupree

Jon Fairbairn wrote:

Isaac Dupree writes:


Jon Fairbairn wrote:

Parenthesis around updates would make them into functions, ie
({a=1,b=2,...}) would mean the same as (\d -> d{a=1,b=2,...}), but be
more concise.

yes it is, however field updates are occasionally slightly
annoying, since they can't change something's type at all,
IIRC.  Say,
data C nx ny = C { x :: nx, y :: ny }
x_set :: nx2 -> C nx1 ny -> C nx2 ny
--x_set x2 c = c {x = x2}  --type error


If that were the case, it would be a serious wart that needed to be
addressed. However, implementation would be fairly straightforward as
an "extension" is already present in ghc:


oh maybe I got confused.  (My confusion also could have been the result 
of a bug that was recently fixed in GHC that affected the type of some 
cases like that where there are multiple constructors...)


-Isaac


Re: Proposal: FirstClassFieldUpdates

2009-07-27 Thread Isaac Dupree

Jon Fairbairn wrote:

Parenthesis around updates would make them into functions, ie
({a=1,b=2,...}) would mean the same as (\d -> d{a=1,b=2,...}), but be
more concise.


yes it is, however field updates are occasionally slightly annoying, 
since they can't change something's type at all, IIRC.  Say,

data C nx ny = C { x :: nx, y :: ny }
x_set :: nx2 -> C nx1 ny -> C nx2 ny
--x_set x2 c = c {x = x2}  --type error
--x_set x2 = ({x = x2})  --still a type error
x_set x2 c = C {x = x2, y = y c} --legal

Which is possibly a reason to stay away from field-update syntax on some 
occasions, and therefore not want it to get a more prominent place in 
the language if it doesn't deserve it yet.


-Isaac


Re: StricterLabelledFieldSyntax

2009-07-26 Thread Isaac Dupree

Iavor Diatchki wrote:

Hello,
I am strongly against this change.  The record notation works just
fine and has been doing so for a long time.  The notation is really
not that confusing and, given how records work in Haskell, makes
perfect sense (and the notation has nothing to do with the precedence
of application because there are no applications involved).  In short,
I am not sure what problem is addressed by this change, while a very
real problem (backwards incompatibility) would be introduced.
-Iavor


A different approach to things that look funny has been to implement a 
warning message in GHC.  Would that be a good alternative?


-Isaac


Re: StricterLabelledFieldSyntax

2009-07-26 Thread Isaac Dupree

Sean Leather wrote:

To me, the syntax is not actually stricter, just that the precedence for
labeled field construction, update, & pattern is lower. What is the
effective new precedence with this change? Previously, it was 11 (or simply
"higher than 10"). Is it now equivalent to function application (10)?


Maybe it's equivalent to "infix 10" (not infixr/infixl), so that it doesn't 
associate with function application (or itself) at all, on either the left 
or the right.  I couldn't tell by reading the patch to the report...


Ian Lynagh wrote:

I think that even an example of where parentheses are needed would be
noise in the report. I don't think the report generally gives examples
for this sort of thing, e.g. I don't think there's an example to
demonstrate that this is invalid without parentheses:
id if True then 'a' else 'b'


Well that's also something that in my opinion there *should* be an 
example for, because IMHO there's no obvious reason why it's banned 
(whereas most of the Report's syntax repeats things that should be 
obvious and necessary to anyone who knows Haskell).


-Isaac


Re: StricterLabelledFieldSyntax

2009-07-26 Thread Isaac Dupree

Jon Fairbairn wrote:

Ian Lynagh  writes:

http://hackage.haskell.org/trac/haskell-prime/wiki/StricterLabelledFieldSyntax


I approve of the principle -- the binding level is confusing, but I
would far rather make a bigger change, so that rather than being
confusable with the binding level of function application, it /has/ the
binding level of function application. ie, instead of a{x=42} one would
have to write {x=42}a,


We already know which record type it is, because record field names aren't 
shared between types (there is no disambiguation).
If it's (data D = D { x, y :: Int }) then (x :: D -> Int) and we would 
have (({x=42}) :: D -> D).
Or (data E n = E1 { ex, ey :: n } | E2 { ey :: n } | E3 { ex :: n }), (ey 
:: E n -> n), (({ex=42}) :: Num n => E n -> E n), but probably never 
allowing the type to change (E n1 -> E n2), even if the update changes both 
ex and ey.


I think ({...}) wouldn't be a terrible syntax -- kind of like how infix 
operators can be made into functions, like (+) -- if you wanted to make a 
proposal for such an extension.


-Isaac


Re: StricterLabelledFieldSyntax

2009-07-25 Thread Isaac Dupree

Ian Lynagh wrote:

Hi all,

I've made a ticket and proposal page for making the labelled field
syntax stricter, e.g. making this illegal:

data A = A {x :: Int}

y :: Maybe A
y = Just A {x = 5}

and requiring this instead:

data A = A {x :: Int}

y :: Maybe A
y = Just (A {x = 5})


and, as currently, "(f some expression) {x=5}" still requires those 
parentheses also?  Although depending on the surroundings, after this 
proposal, it might need to become "((f some expression) {x=5})"


-Isaac


Re: Haskell 2010: libraries

2009-07-08 Thread Isaac Dupree

Heinrich Apfelmus wrote:

If I understand that correctly, this would mean to simply include the
particular version of a library that happens to be the current one at
the report deadline. In other words, the report specifies that say
version 4.1.0.0 of the base library is the standard one for 2010.

Since old library versions are archived on hackage, this looks like a
cheap and easy solution to me. It's more an embellishment of alternative
1. than a genuine 3.


It could be a mere informative reference: "the most-community-accepted 
libraries at the time of publication are:".


Keep in mind also that some of the libraries change irrevocably (like 
base has, with changes like Unicode I/O, or adding the Category class 
above Arrow).  When a library is tied to a particular compiler version it's 
less trivial to support multiple versions of it... and even when it's not, 
version skew can get annoying.


-Isaac


Re: [Haskell] Announcing the new Haskell Prime process, and Haskell 2010

2009-07-07 Thread Isaac Dupree

Isaac Dupree wrote:

Simon Marlow wrote:

Remove n+k patterns


oh also -- anything like this that we remove should get a LANGUAGE flag 
to go along with it.  I don't see NPlusKPatterns in 
Language.Haskell.Extension yet :-)


-Isaac


Re: [Haskell] Announcing the new Haskell Prime process, and Haskell 2010

2009-07-07 Thread Isaac Dupree

Simon Marlow wrote:

Remove n+k patterns
remove FixityResolution from the context-free grammar


There are a couple sensible removals here.  Do we also want to get rid 
of the useless class contexts on data-declarations? (that look like 
"data Ord a => Set a = Set ...")


-Isaac


Re: what about moving the record system to an addendum?

2009-07-07 Thread Isaac Dupree

Malcolm Wallace wrote:


On 7 Jul 2009, at 02:28, John Meacham wrote:


Haskell currently doesn't _have_ a record syntax (I think it was always a
misnomer to call it that) it has 'labeled fields'. ...

and a reworking of the standard to not refer to the current system as a
'record syntax' but rather a 'labeled fields' syntax.


I strongly agree with the latter.  In fact, I was under the impression 
that the Report already avoided the term "record syntax" completely, but 
checking just now showed 6 distinct occurrences.


Ah hah.  Existing extension names:
NamedFieldPuns (was erroneously "RecordPuns" in GHC for a release),
RecordWildCards, DisambiguateRecordFields.

This "extension" could be named NamedFields
(which would then give the lie to the above names, which maybe ought to be 
more like FieldWildCards and DisambiguateNamedFields (DisambiguateFieldNames?)).


Also there is "ExtensibleRecords" which I guess refers to Hugs' TRex?

-Isaac


Re: Proposal: Deprecate ExistentialQuantification

2009-06-27 Thread Isaac Dupree

Niklas Broberg wrote:

  data Foo =
    forall a . Show a => Foo a

which uses ExistentialQuantification syntax, could be written as

  data Foo where
    Foo :: forall a . Show a => a -> Foo



The downside is that we lose one level of granularity in the type
system. GADTs enables a lot more semantic possibilities for
constructors than ExistentialQuantification does, and baking the
latter into the former means we have no way of specifying that we
*only* want to use the capabilities of ExistentialQuantification.


Is it easy algorithmically to look at a GADT and decide whether it has 
only ExistentialQuantification features?  After all, IIRC, hugs and nhc 
support ExistentialQuantification but their type systems might not be up 
to the full generality of GADTs.  (GHC's wasn't even quite up to it for 
quite a long time until around 6.8, when we finally got it right.)


-Isaac


Re: Suggestion: Syntactic sugar for Maps!

2008-11-27 Thread Isaac Dupree

Thomas Davie wrote:


On 27 Nov 2008, at 19:59, circ ular wrote:


I suggest Haskell introduce some syntactic sugar for Maps.

Python uses {"this": 2, "is": 1, "a": 1, "Map": 1}

Clojure also use braces: {:k1 1 :k2 3} where whitespace is comma but
commas are also allowed.

I find the import Data.Map and then fromList [("hello",1), ("there",
2)] or the other form that I forgot(because it is to long!) to be to
long...

So why not {"hello": 1, "there": 2} ?


Let's look at your argument.  I'll ignore spaces in the 
character counts because they're all optional here.

fromList [("hello", 1), ("there", 2)]
 { "hello": 1 ,  "there": 2 }
Basically there are two overheads:
- "fromList", or "Data.Map.fromList". I'll get back to this 
later.
- that the pair-tuple constructor uses three characters "(", 
",", ")" rather than Python's one ":".
The latter is easy to fix!  ":" already means something in 
Haskell, so we'll have to pick something else.

a & b = (a, b)
fromList ["hello" & 1, "there" & 2]
or we could start to get creative, for aesthetic purposes... 
["hello"? 1, ...], or ["hello" :- 1, ...], defining a new data 
constructor (or type synonym?) ":-".


Now comes "fromList" and the issue of monomorphism of normal 
lists.  Well, you're going to have to (or want to) specify 
the type you want at some point, perhaps.  The name of the 
function could be less ugly than "fromList".
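
Putting those two pieces together, a sketch of what the suggestion amounts 
to (my names):

import qualified Data.Map as Map

-- a terser pairing operator, as above
(&) :: a -> b -> (a, b)
a & b = (a, b)
infixr 0 &

-- the "dictionary literal", spelled with ordinary functions
wordCounts :: Map.Map String Int
wordCounts = Map.fromList ["hello" & 1, "there" & 2]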


If you expect pattern-matching, there's another can of 
worms, and I suggest looking into GHC's new lightweight 
"view patterns" (I wonder if they can really do this well, 
though? -- because fromList/toList sorts.)

http://www.haskell.org/ghc/docs/6.10.1/html/users_guide/syntax-extns.html#view-patterns

In a similar vein, I suggest not only to not do this, but also for 
Haskell' to remove syntactic sugar for lists (but keep it for strings)!


not too hard to cope with, though ugly IMHO if we translate 
it the obvious way:

fromList ("hello" & 1 : "there" & 2 : [])


[...]
3) (requiring you to agree with my opinions about tuples) it would allow 
for clearing up the tuple type to be replaced with pairs instead.  (,) 
could become a *real* infix data constructor for pairs.  This would make 
us able to recreate "tuples" simply based on a right associative (,) 
constructor.  I realise there is a slight issue with strictness, and (,) 
introducing more bottoms in a "tuple" than current tuples have, but I'm 
sure a strict version of the (,) constructor could be created with the 
same semantics as the current tuple.


You are right, if you follow a simple asymmetric restriction 
(which, alas, keeps you from using them as ordinary pairs):

data Tup a b = Tup a !b
type Tup0 = ()
type Tup1 a = Tup a ()
type Tup2 a b = Tup a (Tup b ())
type Tup3 a b c = Tup a (Tup b (Tup c ()))
--or,
infixr 5 `Tup`
type Tup2 a b = a `Tup` b `Tup` ()
--etc.


-Isaac


Re: .. Add simonpj's ImportShadowing proposal

2008-11-27 Thread Isaac Dupree

http://hackage.haskell.org/trac/haskell-prime/wiki/ImportShadowing

I agree.  It is very tiresome and confusing, because when 
you write "M.nub" in your own module "M", M doesn't necessarily 
even export nub, nor did you "import M as M", so it's an odd 
sort of self-reference.  Also, that self-reference is banned 
in some places (maybe left-hand sides of definitions?) and 
required in others (when a name you define is also imported).  The 
"required" part would be mitigated by the proposal.


It's probably somewhat more worth warning about when the 
import was explicit as well as unqualified, e.g.

import Data.List (nub)
rather than merely
import Data.List

But it's a little confusing, because you can define methods 
in instances (e.g. Category's id and (.)), but I don't think 
instance definitions would be affected by the proposal -- 
only class definitions would.


-Isaac


Re: Repair to floating point enumerations?

2008-10-16 Thread Isaac Dupree

Christopher Lane Hinson wrote:


I agree with David, we should be using multiplication, not addition.
However, I think that under the law of least surprise, we should
require that for all a,b,z:

all (\x -> x >= a && x < z || x <= a && x > z) [a,b..z].


so that [0,0.1..0.3] doesn't include the terminating value 
that's a little more than the literal 0.3?


For example, anything in the neighborhood of this is just unfair, even 
if it's within David's fudge factor:


Prelude> map (\x -> 1 / (x-0.6)) [0,0.1..0.55]
[-1.6667,-2.0,-2.5,-3.334,-5.001,-10.002,Infinity] 


but that's a significant fudge, 0.5 versus 0.55 versus 0.6 
-- right?


-Isaac


Re: Mutually-recursive/cyclic module imports

2008-08-17 Thread Isaac Dupree

Isaac Dupree wrote:

Duncan Coutts wrote:

[...]

I'm not saying it's a problem with your proposal, I'd just like it to be
taken into account. For example do dependency chasers need to grok just
import lines and {-# SOURCE -#} pragmas or do they need to calculate
fixpoints.


Actually, good point, Duncan; that got me thinking about 
what we need in order to clearly not lose much (or any) of 
the .hs-boot efficiency.  (warning: another long post ahead, 
although the latter half of it is just an example from GHC's 
source) [and I re-read my post and wasn't sure about a few 
things, but maybe better to get feedback first -- please 
tell me if I'm being too verbose somewhere, too]


Let's look at the total imports of a .hs and its .hs-boot, 
as they currently are for GHC.  Either file can import a given 
module non-SOURCE (call that NOSOURCE), import it SOURCE, or 
not import it at all.

.hs:NOSOURCE, .hs-boot:NOSOURCE : okay
.hs:NOSOURCE, .hs-boot:SOURCE : okay
.hs:NOSOURCE, .hs-boot:not-imported : okay
.hs:SOURCE, .hs-boot:NOSOURCE : bad, if the .hs needs 
SOURCE, then probably so does the .hs-boot

.hs:SOURCE, .hs-boot:SOURCE : okay
.hs:SOURCE, .hs-boot:not-imported : okay
- the .hs-boot importing a module that the .hs doesn't is 
invalid, or at least useless [actually, see later example -- 
there may be reasons for this, but in that case, it doesn't 
hurt to also import the module in the .hs (assuming there's 
no syntactic/maintenance burden), and it provides better 
automatic error-checking to do so]


Given the limited amount of information a .hs-boot file (or 
SOURCE-imported file, in my scheme) needs for being a 
boot-file, there is no advantage to import the modules it 
depends on as NOSOURCE.  The compiler just has to be clever 
enough to ignore imports of functions that it can't find out 
the type of.  Also, currently using SOURCE requires the 
imported module to have a .hs-boot.  But it should work fine 
to look for a .hi and use that in the absence of .hi-boot, 
because it has strictly a superset of the information (so 
that my statement that "SOURCE is superior to NOSOURCE when 
it works" can be truer, for the sake of demonstration). 
[oops! I was wrong, it may need to NOSOURCE-import on 
occasion to find out a function's type - more on that in a 
later post?]


Now, since the .hs-boot SOURCE vs NOSOURCE has been 
collapsed, I think we can move mostly-all .hs-boot info into 
the .hs file.  If the .hs-boot file had imported something, 
the corresponding import in the .hs is imported with 
{-#SOURCE_FOLLOW#-} (in addition to {-#SOURCE#-} or 
{-#NOSOURCE#-}); otherwise it's imported with 
{-#SOURCE_NOFOLLOW#-} (ditto).  For demonstration, I'll 
assume that all imports are annotated this way, with two 
bits of information.  Presumably all imports that aren't 
part of an import loop are NOSOURCE (which includes all 
cross-package imports).


Now let's look at the dependency chaser.
NOSOURCE imports must not form a loop.  They form dependency 
chains as normal.
SOURCE imports depend on either a .hi or a .hi-boot for the 
imported module.

When a X.hi-boot is demanded:
only SOURCE_FOLLOW imports are dependency-chased from X.hs, 
through any .hs modules that don't already have a .hi or 
.hi-boot.
In the case where .hs-boots worked, this *can* avoid cycles. 
 If this SOURCE_FOLLOW dependency DAG doesn't have any 
cycles, then it should be as simple as calling (the 
fictional) `ghc -source X.hs` to produce X.hi.  If there are 
cycles, and it is sometimes necessary*, GHC needs to be 
slightly smarter and be able to produce all the .hi-boot 
files at once from any graph SCCs (loops) that prevent it 
from being a DAG (e.g., `ghc -source X.hs Y.hs` to produce 
X.hi-boot and Y.hi-boot).  Note that it doesn't need to be 
particularly smart here -- e.g., no type inference is done.


*necessary loops:
example 1, the data/declarations literally loop:
module X1 where
{ import Y1(Y); data X a = End a | Both Y Y; }
module Y1 where
{ import X1(X); data Y = Only (X (Maybe Y)); }
(or kind annotations could be required for these loops in 
general, e.g. data X (a :: *) = ...)
[hmm, in this case actually all we need is the data 
left-hand-side, so we could do this in two stages.  But that 
wouldn't work out so well if their RHSs contained 
{-#UNPACK#-}!SomeNewtypeForInt where SomeNewtypeForInt was 
from the other module.  But that's an optimization that it 
might be okay not to do, as long as it was consistently not 
done both for .hi-boot and .hi/.o; and it could perhaps be 
doable]


example 2, there are just too many back-and-forths:
module X2 where
{ import Y2(Yb); data Xa = Xa; data Xc = Xc Yb; }
module Y2 where
{ import X2(Xa,Xc); data Yb = Yb Xa; data Yd = Yd Xc; }
This second one "could" also be accomplished if multiple 
different .hs-boots were allowed per .hs,
although it doesn't seem worth th

Re: Mutually-recursive/cyclic module imports

2008-08-16 Thread Isaac Dupree

Duncan Coutts wrote:

[...]

I'm not saying it's a problem with your proposal, I'd just like it to be
taken into account. For example do dependency chasers need to grok just
import lines and {-# SOURCE -#} pragmas or do they need to calculate
fixpoints.


Good point.  What does the dependency chaser need to figure out?
- exactly what dependency order the files must be compiled in (e.g., by ghc -c)?
- what files (e.g., .hi) need to be findable by, e.g., ghc -c?
- recompilation avoidance?

-Isaac


Re: empty case, empty definitions

2008-08-15 Thread Isaac Dupree

Neil Mitchell wrote:

Sounds bad. Consider:

gray :: Color
grey = newColor "#ccc"


My rationale for typos not being a problem (both your 
example, and the one Malcolm Wallace posted to the "empty 
case" ticket) is that GHC will give you a warning anyway 
(and that warning should be on by default).  Should we be 
worrying about the situation being worse for other compilers 
that don't have good warning-systems (e.g. I don't think 
Hugs has warnings at all)?


-Isaac


Re: Mutually-recursive/cyclic module imports

2008-08-15 Thread Isaac Dupree

Isaac Dupree wrote:
In the case of the proposed SOURCE imports without hs-boot files, GHC 
would ...


Ah, another difference from the .hs-boot system: in my 
proposal, when a file is imported with SOURCE and dependency 
chasing (e.g. of data-types) is done through its imports, it 
won't make a difference whether those imports have SOURCE 
pragmas; the compiler is in SOURCE-mode already, and will 
look at .hi files if there are any up-to-date ones available 
(e.g. the imported module isn't in the SCC / import loop), 
and otherwise will look at the source code (if it wanted, it 
could make some sort of .hi-boot out of it, I suppose).


As opposed to the .hs-boot mechanism where .hs-boot files 
must choose carefully (and perhaps differently to the 
corresponding .hs file) whether their imports use SOURCE 
(they must if it's necessary to prevent loops, but must not 
if that module doesn't have a .hs-boot file that contains 
what's needed! But sometimes it doesn't make a difference, 
except for recompilation!)


-Isaac


empty case, empty definitions

2008-08-15 Thread Isaac Dupree
There are two separate parts I propose, the second one I'm 
less sure of, but they're somewhat related.


1. Allow empty case, i.e. "case some_variable of { }" (GHC 
ticket [1]).  This adds consistency: it always causes a 
pattern-match error, and it is a sensible way to look at all 
the cases of types with no constructors (recall 
EmptyDataDecls will probably be in Haskell' [4]) -- 
especially for automatic tools (or programmers familiar with 
dependent types; GADTs have some of these effects :-)). 
Presumably, any time that some_variable could be non-bottom, 
GHC will warn about the incomplete patterns :-).
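
A sketch of what proposal 1 looks like (written with the GHC extension 
flags EmptyCase and EmptyDataDecls, as an assumption about naming):

{-# LANGUAGE EmptyCase, EmptyDataDecls #-}

data Void   -- a type with no constructors

-- The empty case examines all (zero) constructors of Void; a call can only
-- be reached with a bottom argument, so it causes a pattern-match error,
-- exactly as described above.
absurd :: Void -> a
absurd v = case v of {}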


2. When a type signature for a function is given, allow the 
function to be left with no defining equations (GHC ticket [2]).  The result of 
calling the function is a pattern match failure (presumably 
the source-location given for the match failure will be the 
location of the type-signature).  This can also be useful 
for calling functions before implementing them, helping the 
type-checker help me do incremental work (again, obviously 
produces a warning if the function could possibly be 
non-bottom).


However I can think of a few things this (proposal 2) could 
interfere with:
2.i. Implementing a class method, will you get the default 
if that method has a default?  Actually it turns out to be 
forbidden...

class C n where
  foo :: n -> n
  foo = id
instance C Int where
  foo :: Int -> Int
  --even if we define foo here too, it's an error:
  --misplaced type signature (perhaps thanks to improved
  --error messages, thanks simonpj! [5]).
  --Anyway, I think type signatures ought to be allowed here.
I propose to allow type-signatures in instances, which must 
be equivalent to the signature in the class declaration 
after the class's signature is specialized to the particular 
instance type(s).  If such a type-signature is found, allow 
the function to be defined as normal, which includes, if 
there are no patterns, an error if proposal 2 isn't adopted, 
and a pattern-match failure if proposal 2 is adopted.
(also it turns out that pattern bindings aren't allowed in 
instances, such as {instance C Int where (foo) = negate}, 
but I can't say I have a compelling use-case for that!:-))


2.ii. It could interfere with another feature request of 
mine (though I'm not sure I want it anymore) (GHC ticket 
[3]) : I'd like it to be allowed to give a (possibly more 
restrictive?) type signature at the top level of a module, 
to a function imported unqualified.  Obviously in this case 
I don't want the function to be treated as pattern-match 
failure; but I think we can tell the difference because the 
name is in-scope in this case. Luckily there is no negative 
interaction with my related proposal to simply allow 
multiple equivalent type-signatures anywhere one of them is 
allowed in a declaration-list.


So actually, in summary I can't really see anything wrong 
with proposal 2, especially if my proposal under 2.i. is 
adopted.


[1] http://hackage.haskell.org/trac/ghc/ticket/2431
[2] http://hackage.haskell.org/trac/ghc/ticket/393
[3] http://hackage.haskell.org/trac/ghc/ticket/1404
[4] 
http://hackage.haskell.org/trac/haskell-prime/wiki/EmptyDataDecls

[5] http://hackage.haskell.org/trac/ghc/ticket/1310


Re: Mutually-recursive/cyclic module imports

2008-08-15 Thread Isaac Dupree

Ian Lynagh wrote:

I'm not sure if defaulting actually makes this worse, but regardless, I
think we should seriously consider removing defaulting anyway:

http://hackage.haskell.org/trac/haskell-prime/wiki/Defaulting#Proposal4-removedefaulting


Oh, actually, I agree with that proposal to remove 
defaulting.  Maybe we should try implementing that and see 
how much things break.  I imagine most uses can be solved 
by, if nothing else, adding local functions with 
more-constrained types, a bit similar to the (^) change.


I noticed that depending on the resolution of
http://hackage.haskell.org/trac/haskell-prime/wiki/KindInference
, we might have a different sort of defaulting that examines 
exactly a whole module (which could also make it harder for 
my cyclic-module proposal to avoid recompilation? not sure)


If we remove defaulting and the monomorphism restriction 
*and* don't add any other per-module semantics, then we get 
the module system out of the way of the semantics, which 
would make me very happy!  There are a few GHC extensions 
that are still unfortunately per-module -- e.g. 
OverlappingInstances perhaps ought to be a notation or 
pragma on a class, rather than affecting all classes that 
happen to be defined in the module.  (Pragmas aren't 
supposed to have an effect if they're not recognized; but 
sometimes people put OverlappingInstances on a class not 
because they're planning to make any such instances, but to 
allow users to define such instances; in which case the 
class and stock instances really can compile even in 
compilers that don't support overlapping instances)


-Isaac


Mutually-recursive/cyclic module imports

2008-08-15 Thread Isaac Dupree
Haskell-98 specifies that module import cycles work 
automatically with cross-module type inference.


It has some weird interactions with defaulting and the 
monomorphism restriction.  In Haskell-prime we're planning 
on removing artificial monomorphism, but defaulting will 
still be necessary (and can still be set differently per 
module).


Only JHC fully implements the recursive module imports of 
Haskell-98.
GHC and NYhc each have their own proprietary "boot-files" 
with slightly odd semantics to allow this to work (albeit 
the syntax is simple enough).

Hugs doesn't support it at all.
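
For reference, a minimal sketch of how a GHC-style boot file breaks such a 
cycle today (my example modules; A imports B, B imports A):

-- A.hs
module A where
import {-# SOURCE #-} B (TB)
data TA = MkTA TB

-- B.hs-boot: a hand-written stub interface that breaks the cycle;
-- it declares TB abstractly, without its constructors.
module B where
data TB

-- B.hs: the real module, compiled after A.hs against the stub above.
module B where
import A (TA)
data TB = MkTB TA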

I propose we simplify things and lay down some rules, 
without having to invent explicit module-interface 
signatures.  Then I wouldn't complain (:-)) that GHC doesn't 
have reasonable support for cyclic modules [1][2]. 
(Compiler writers will have to give feedback on how plausible 
this is :-) -- I think GHC and NYhc "should" be able to 
adapt their boot-interface-file mechanisms to the scheme I'm 
proposing.)


(This is really more of a sketch than a complete proposal at 
this stage.)


In particular, I propose an amount of annotation in a module 
that *shall* make it compile.  Compilers are free to accept 
code for other reasons (e.g. .hs-boot files, or some 
official module interfaces).  These first proposals are 
clean-ups that reflect how ridiculous people think the 
current standard's module interface semantics are compared 
to most languages.  Also they make cross-module type 
inference unnecessary, eliminating the defaulting problem.


namespace level: Haskell98 says that what a module exports 
is determined by the smallest fix-point of what is possible. 
 I can't see a practical use for this behavior, which is 
easily confusing.  I think that exports that depend on the 
result of a fix-point should be rejected.  It can be useful 
in module A to import a few types/functions explicitly from 
a module B that then goes on to export the whole of module A 
though.


type level: Inside any given SCC (loop) of modules, any 
function imported from another member of the SCC normally 
shall have an explicit type signature in the module that 
exports it.  (This doesn't seem a great burden, since 
type signatures for top-level functions/values are considered 
good practice anyway.  Can anyone think of a use-case where 
cross-module type inference would be particularly useful?)


Exception:  imports may be given the {-# SOURCE #-} pragma. 
 This fulfills two purposes:
(1) It is a hint to a compiler that compiles modules 
separately that the current module should be compiled before 
the module being imported with {-# SOURCE #-}.  Obviously, 
this can make optimization worse, since it's likely that 
SOURCE-imported functions won't be strictness-analyzed or 
inlined or anything; but that's the .hs-boot situation 
already.  (And in principle even a compiler that likes 
separate compilation could break individual functions down 
into dependency order to compile them, adding another 
tradeoff point...)
(2) If SOURCE pragmas "break the loop", then only functions 
that are actually imported with SOURCE must be given type 
signatures, even if module B then goes on to import module A 
wholesale: example:

module A where {import {-#SOURCE#-} B (bf); ...}
module B (module A, module B) where {import A; bf :: ...; ...}
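Spelled out a little more concretely, a sketch with invented module 
and function names (as two separate files):

  module A where
  import {-# SOURCE #-} B (g)   -- breaks the A/B cycle here
  f :: Int -> Int
  f 0 = 0
  f n = g (n - 1)

  module B where
  import A (f)
  g :: Int -> Int     -- signature required: g is SOURCE-imported by A
  g 0 = 1
  g n = f (n - 1)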

Since defining data types in logical places is an important 
use of cyclic imports, I propose not to require any extra 
annotation for them; the compiler will have to chase them 
down and understand them in loops (how else to do it?).
However, there are some particular things to keep in mind 
regarding potential recompilation:

(with a bit of a GHC bias)
 - Changing any orphan instances in an SCC will force the whole 
   thing to recompile (but what pluckiness, putting orphan 
   instances *there*!)
 - If a data type or newtype is imported without its 
   constructors, then the RHS changing doesn't really force a 
   recompile.
I imagine this could work in GHC by, for each SOURCE import, 
storing the MD5 of the imported interface.  Then when 
checking if you seriously have to recompile module A, you 
don't have to if none of those MD5s have changed and none of 
the non-SOURCE-imported modules' interface MD5s have either. 
 In module cycles that aren't explicitly broken by SOURCEs, 
GHC (or any compiler) should just insert an implicit SOURCE 
for *all* cyclic imports (and possibly emit a warning) 
(unless the compiler wants to guess which SOURCES are better 
for optimization?).  Presumably compilers that can do 
separate as well as non-separate compilation could take an 
optimization flag that tells them to compile cycles together 
as one piece rather than obeying the SOURCES for 
recompilation efficiency.


So what does the compiler have to look at in a 
SOURCE-imported module?


In the case of the proposed SOURCE imports without hs-boot 
files, GHC would move from calculating one interface (MD5) 
per module (or two interfaces in the case of .h

Re: Proposal: change to qualified operator syntax

2008-07-11 Thread Isaac Dupree

Simon Marlow wrote:

Dan Weston wrote:
Would it not be cleaner just to disallow infix notation of qualified 
operators altogether? It is clear enough to use "import qualified" or 
let or where clauses containing prefix notation to identify a 
qualified operator with an unqualified one:


UGLY:

m `Prelude.(>>=)` a
  `Prelude.(>>=)` b
  `Prelude.(>>=)` c


CLEAR:

m >>= a >>= b >>= c
  where (>>=) = Prelude.(>>=)

[Personally, I prefer where to let for such purely syntactic details].


I did consider doing that, and it is certainly an option.  The reasons I 
chose to allow the infix forms are:


 - if you add an import and introduce a name clash, then you want
   to resolve clashes by just modifying the names, not by
   refactoring code.  The trick from your example above works,
   but it requires that all instances of (>>=) are
   in scope qualified, otherwise you get a shadowing warning.

 - it's cheap in terms of grammar and implementation.


Also, I just had a dream about this last night... The other advantage is 
that `Prelude.(>>=)` has the same infix precedence as the imported 
operator (right?), whereas if you want the same for your local synonym 
then you'll have to give the synonym an appropriate fixity declaration 
explicitly, e.g. "infixl 1 >>=", in the where clause.


Fortunately I like the proposal.  (1) Have any implementations 
implemented it yet?


(2) As for (`p`) and (`Prelude.(>>=)`) not being allowed (even though `` 
sections are, and parenthesized operator names like (+) are): I think we 
can make this less of an issue by giving a decent error message for it 
rather than "parse error on input `)'" (e.g. "`(`...`)' isn't allowed 
because it's equivalent to `...'").


Do (1) or (2) have/need GHC trac tickets now?

-Isaac
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: PROPOSAL: Make Applicative a superclass of Monad

2008-06-30 Thread Isaac Dupree

Ashley Yakeley wrote:

For example:

class Functor f => Applicative f where
  return :: a -> f a
  ap :: f (a -> b) -> f a -> f b
  (>>) :: f a -> f b -> f b
  (>>) = liftA2 (const id)


For backwards compatibility for everyone who *uses* Applicative (and 
arguably it is a less ugly notation):


(<*>) = ap
(and  pure = return)

I'm not sure whether the word "ap" is even as well known as "<*>" right 
now; I wonder which one we'd prefer to use in Applicative.



class Applicative m => Monad m where
  (>>=) :: m a -> (a -> m b) -> m b
  fail :: String -> m a
  fail s = error s


I want to add to this Applicative=>Monad class:

join :: m (m a) -> m a
join mm = mm >>= id
m >>= f = join (fmap f m)

What do others think about that?
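To see the whole proposed shape in one place, here is a minimal 
self-contained sketch (renamed classes and primed methods so it 
compiles alongside the Prelude; it is only my illustration of the 
idea):

  class MyFunctor f where
    fmap' :: (a -> b) -> f a -> f b

  class MyFunctor f => MyApplicative f where
    pure' :: a -> f a
    ap'   :: f (a -> b) -> f a -> f b

  class MyApplicative m => MyMonad m where
    bind' :: m a -> (a -> m b) -> m b
    join' :: m (m a) -> m a
    bind' m f = join' (fmap' f m)   -- default via join'
    join' mm  = bind' mm id         -- default via bind'

  -- an instance supplies fmap', pure', ap', and either bind' or join':
  instance MyFunctor Maybe where
    fmap' f (Just x) = Just (f x)
    fmap' _ Nothing  = Nothing
  instance MyApplicative Maybe where
    pure' = Just
    ap' (Just f) (Just x) = Just (f x)
    ap' _        _        = Nothing
  instance MyMonad Maybe where
    bind' (Just x) f = f x
    bind' Nothing  _ = Nothing

(The same empty-instance caveat as for (==)/(/=) applies to the 
mutual bind'/join' defaults, of course.)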


(P.S. And I guess this hierarchy change is quite independent of the 
difficult task of removing "fail" from Monad, so I won't discuss that 
here/now)


-Isaac
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Consistency of reserved operators and bang patterns

2007-09-07 Thread Isaac Dupree

Isaac Dupree wrote:

Twan van Laarhoven wrote:
Oh, and while we are at it, I think (:) should also be removed as a 
reservedop, there is no reason for it to be on that list.


Backwards compatibility requires that it be implicitly imported from 
Prelude even in a module that does "import Prelude ( )" (although Hugs 
is already broken in this regard).


In particular, Haskell-98 bans

import Prelude ( (:) )

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Consistency of reserved operators and bang patterns

2007-09-07 Thread Isaac Dupree

Twan van Laarhoven wrote:
The bang pattern proposal [1] still allows (!) to be used as an 
operator. I think there should be no difference in this regard between ! 
and ~, since they are used in exactly the same location.


In my opinion the best thing would be to allow (~) and (@) as operators. 
With the same restriction on definition as (!), i.e. they must be 
defined in function style, not as an operator.


The change to the syntax would be to remove @ and ~ from the reserved 
operators list [2],

  reservedop -> .. | : | :: | = | \ | | | <- | -> | @ | ~ | =>
making it
  reservedop -> .. | : | :: | = | \ | | | <- | -> | =>


I agree - it confused me in the past that I couldn't define (@) or (~) 
operators.  Bang-pattern syntax being active will still change the 
meaning of


x ! y = z

of course.

Oh, and while we are at it, I think (:) should also be removed as a 
reservedop, there is no reason for it to be on that list.


Backwards compatibility requires that it be implicitly imported from 
Prelude even in a module that does "import Prelude ( )" (although Hugs 
is already broken in this regard).  And that makes it fairly useless as 
a non-reserved symbol.  If not for that issue, I agree.


Isaac
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Qualified identifiers opinion

2007-08-20 Thread Isaac Dupree

Simon Marlow wrote:
I believe the solution we adopted for GHC 6.8.1 (and I proposed for 
Haskell') strikes the right balance.


M.where is lexed as an identifier.  This doesn't require adding any 
exceptions or corner cases to either the implementation or the 
specification of the grammar.  It is much easier to implement than the 
existing Haskell 98 rule (I deleted 30 lines of code from GHC's lexer to 
implement it).  It's easy to understand.  It removes an opportunity for 
obfuscation.  It must be the right thing!


Now I've found the h'-wiki page
http://hackage.haskell.org/trac/haskell-prime/wiki/QualifiedIdentifiers

I _think_ the change to lexical syntax on that page is the one Simon 
mentions? and is also the same as what I am supporting?


(I am terribly confused about "Foo.f = " though, since I thought I 
_used_ some code that qualified its definitions that way, and thought it 
was odd. Maybe it was just referring to the things it defined by e.g. 
Foo.f (without importing itself), and I was confused, and further 
confused that definitions then COULDN'T be qualified that way? oh dear...)


Isaac
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Qualified identifiers opinion

2007-08-18 Thread Isaac Dupree

Christian Maeder wrote:
| 3. I'm against qualified identifiers, with the unqualified part being a
| keyword like "Foo.where". (The choice of qualification should be left to
| the user, usually one is not forced to used qualified names.)

Okay, here's a thought experiment... one may follow along, and agree or 
not as one likes (I'm not sure how much I agree with it myself, though 
it might be an interesting way to structure a compiler)



> {-# LANGUAGE ForeignFunctionInterface #-}
> module Foo where

Suppose all modules have an implicit, unavoidable

> import ":SpecialSyntax" (module, where, let, [], -- ...
>  , foreign --because that extension is enabled
>   )

Now let's import some imaginary already-existing modules that use "keywords"

> import A (foreign)
> import B (mdo)

This turns up a problem already: explicitly naming things in an import 
or export list might not work unambiguously, because keywords are 
sometimes used to mean special things there. Going on... maybe we 
imported the whole modules.


According to standard Haskell import rules, there is no conflict until 
the ambiguous word is used.


Either "f" here works fine, because ":SpecialSyntax" in this module did 
not import "mdo":


> f = mdo
> f = B.mdo

Whereas the possibilities with "foreign"...

> g = foreign --error, ambiguous
> foreign import ccall  --error, ambiguous
> g = A.foreign --okay, unambiguous
> ":SpecialSyntax".foreign import ccall  -- can't write in Haskell!

Now, if we want to avoid the understandably undesirable matter of 
imports interfering with keywords (after all, keywords can appear before 
the import list is finished, such as "module" "where" and "import"), we 
could tweak the import-conflict rules for this special case. In this 
module where "foreign" is imported from ":SpecialSyntax", the mere 
phrase "import A" could raise an error, as if all imported syntax were 
used (unqualified, as always) in the module.  Whereas, "import qualified 
A" would not.  (and what about "import A hiding ..."?)





By the way, we are lucky that pragmas have their own namespace {-# NAME 
arguments #-}.  But as I mentioned, we lack a namespace for extensions 
that have a semantic impact on the annotated code.  Certain 
preprocessors invent things like {-! !-} ... or add template-haskell 
syntax, or some arbitrary other keywords syntax like "proc...do"... or 
even steal large portions of existing comment syntax (Haddock, which 
isn't even a semantic impact on the code!)


BTW #2: The simpler and less variable the lexer is, the easier it is to 
scan for LANGUAGE pragmas.  That search doesn't need to deal with 
keywords at all. (although it may be popular not to use the usual lexer 
in order to search for those pragmas, I don't know)



Isaac
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Qualified identifiers opinion

2007-08-17 Thread Isaac Dupree

Christian Maeder wrote:

Hi Isaac,

just to give you a reply at all, see below. I reply
[EMAIL PROTECTED] since I'm not subscribed to
haskell-prime. And I don't want to subscribe, because I'm more
interested that Haskell becomes more stable (and standard).


Then maybe you can join haskell-prime and provide the energy that rounds 
up all the little fixes and tries to actually produce the thing! 
Drastic changes are not intended to go in.  Haskell' should bring more 
stability and standardness (as long as it doesn't diverge too much from 
Haskell98, which would decrease stability and standardness)



So here is
my opinion:

1. The lexer should recognize keywords.

2. I would not mind if Haskel98 rejected all keywords that are also
rejected by extensions, so that the lexer is extension independent.
(Starting with Haskell98, removing conflicting identifiers as soon as I
switch on valuable extensions does not make sense.)


Trouble is, extensions are just that: extensions, and more with their 
own keywords may be added in the future! unless we want an 
internet-standard-like "x-keywordname" - but that doesn't solve this 
problem: standardized new keyword names clogging up the general 
namespace, as long as they don't have a symbol (like Objective-C has 
@class, @whatever...).



3. I'm against qualified identifiers, with the unqualified part being a
keyword like "Foo.where". (The choice of qualification should be left to
the user, usually one is not forced to used qualified names.)

4. However, "Foo.where" should always be rejected and not changed to
"Foo.wher e"! (Longest matching, aka "maximal munch", must not consider
keywords!)

(see end of: http://www.haskell.org/onlinelibrary/lexemes.html#sect2.4)

I would not mind if a name "F. " is plainly rejected. It only makes
sense, when a data constructor is the first argument of the composition
operator "(.)"


I wouldn't mind if that was banned either. That case needs to be 
considered for implementing my lexer. In fact, banning that and 
qualified keywords allows the lexer proper not to know keywords and 
nevertheless ban qualified keywords (a bit of a hack).  But... while I 
wouldn't _recommend_ using qualified keywords, and compilers could give 
a warning even for haskell98 code that uses known 
extension-keyword-names at all, it seems best to me, to _allow_ them, in 
the interests of allowing code to remain fairly stable with the 
potential of extensions being developed (especially thinking of the 
BangPatterns that had an effect on existing definitions of (!) ).




Maybe "." and "$" as operators should require white spaces on both
sides, since "$(" also indicates template haskell.


but it's so convenient as it is... plenty of code uses (.) without 
spaces, and I don't like the way template-haskell steals "$(" and "$id" 
(from the point of view of a person who has never tried to use 
template-haskell).


I think Haskell is more stable by allowing existing code, e.g.

  f = fix (\rec -> rec)  --'rec' is an arrow-sugar keyword

than by banning some bunch of new keyword names.  And allowing interim 
interoperability with old code that exports those names, like the 
unfortunate (!) or (.) (I know, those aren't exactly ever keywords/syms), 
seems like a good idea when it removes complexity rather than adding it. 
I don't want Haskell98 to become a language that has difficulty 
interoperating with libraries and applications that use newer 
Haskells.


from other comments:

What's wrong with the status quo?  Our current lexical rules *seem*
complicated to newbies, but just like everything else in Haskell it carries
a deep simplicity; having only one rule (maximal-munch) gives a certain
elegance that the proposals all lack.

I'd hate to see Haskell become complex all the way down just to fix a few
corner cases; I see this pattern of simplicity degerating through
well-intentioned attempts to fix things all over the language...


I agree with Stefan, for the reasons he stated and for one additional
reason:  There would be a multitude of unintended behavior changes.


Well, GHC doesn't implement aforementioned maximal-munch re: keywords. I 
don't think it's good (compositional?) design for the set of keywords to 
be part of the lexer rather than a pass after it, when keywords behave 
so similarly to other words, and also when there are non-keywords like 
"as" and "qualified" and sometimes "forall" (whose non-reserved status I 
support).

lex --> keywords --> layout --> parse

Besides, I don't think any of the above proposals will generate behavior 
changes in real code. Some cause more errors (adding more keywords; 
banning adjacent '.' or '$') and some allow a few more things that were 
errors before.

f = Just.let x = x in id  --a.k.a. f = Just
would break in my proposal, but it also breaks according to Haskell98...


Isaac
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/list

Qualified identifiers opinion

2007-08-15 Thread Isaac Dupree


Especially after writing a partial lexer for Haskell, I opine that this 
should be all legal:




module Foo where

--in case you didn't know, this is legal syntax:
Foo.f = undefined

Foo.mdo = undefined
Foo.where = undefined
x Foo.! y = undefined
x Foo... y = undefined --remember ".." is reserved id, e.g. [2..5]


{-# LANGUAGE RecursiveDo, BangPatterns #-} module Bar where
import Foo
hello !x = mdo { y <- Foo.mdo Foo... ({-Foo.-}f x y); return y }

{- Haskell 98 -} module Baz where
import Foo
goodbye x = x ! 12



(Foo.where) lexing as (Foo.wher e) or (Foo . where) does not make me 
happy.  (being a lexer error is a little less bad...)  Especially not 
when the set of keywords is flexible.  I don't see any good reason to 
forbid declaring keywords as identifiers/operators, since it is 
completely unambiguous, removes an extension-dependence from the lexer 
and simplifies it (at least the mental lexer); Also I hear that the 
Haskell98 lexing is (Foo.wher e), which I'm sure no one relies on...


Well, that's my humble opinion on what should go into Haskell' on this 
issue.


Isaac
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


default for quotRem in terms of divMod?

2007-08-08 Thread Isaac Dupree
(something very odd was going on with that reply-to! trying to send this 
in a sensible way...)


Christian Maeder wrote:
>
>
> Isaac Dupree wrote:
>> In class Integral, divMod has a default in terms of quotRem.
>> (quot,rem,div,mod all have defaults as the both-function they're part
>> of.)  I'm sure divMod is more natural than quotRem to implement for some
>> types... so why doesn't quotRem have a default in terms of divMod? it
>> has no default! Then the "minimal to implement" will change from
>> (toInteger and quotRem) to (toInteger and (quotRem or divMod)).
>>
>> Isaac
>
> while I don't care if quotRem or divMod should be implemented. I oppose
> to give both default implementations in terms of the other.
>
> Already for the class Eq either == or /= must be defined, with the
> unpleasant effect that an empty instance like:
>
>   instance Eq T
>
> leads to a loop (when == or /= is called on elements of type T).
>
> The empty instance does not even raise a warning about unimplemented
> methods (since the default definition is used).
>
> I'd rather prefer to remove /= as method of Eq.

I second that this is a problem.  However, I think that compilers are 
perfectly justified in replacing nontermination with a runtime 
error (i.e. an exception - the generalization of a call to "error"), and 
also in generating warnings for unimplemented class methods, and we 
could solve it that way.  I don't know how hard it is to detect 
automatically that two default definitions produce a useless loop... if 
that's too hard, possibly some explicit annotation in the code could be 
used (already there is an informal convention of saying exactly which 
methods must be implemented, so that should be formalizable...).  I 
_think_ that for Eq and my Integral proposal, GHC's strictness analyser 
is quite up to the task.
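For reference, the loop in question with stand-in names (this is just 
the Prelude's pair of mutual defaults written out):

  class MyEq a where
    (===), (/==) :: a -> a -> Bool
    x === y = not (x /== y)
    x /== y = not (x === y)

  data T = T
  instance MyEq T
  -- no methods given: (===) calls (/==), which calls (===), and so
  -- on forever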


In general, what do we think about replacing certain nontermination with 
an exception, particularly by the compiler's detection? GHC already does 
it, very unreliably, with the <<loop>> low-level thunk 
something-or-other.  It might be interesting to see how many spurious 
warnings would be generated if GHC also generated a warning for each - 
i.e. is nontermination ever intentionally used as a _|_ normally?


Isaac

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


default for quotRem in terms of divMod?

2007-08-07 Thread Isaac Dupree
In class Integral, divMod has a default in terms of quotRem. 
(quot,rem,div,mod all have defaults as the both-function they're part 
of.)  I'm sure divMod is more natural than quotRem to implement for some 
types... so why doesn't quotRem have a default in terms of divMod? it 
has no default! Then the "minimal to implement" will change from 
(toInteger and quotRem) to (toInteger and (quotRem or divMod)).
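Something like this should do, mirroring the existing default of divMod 
in terms of quotRem (a sketch; I haven't run it through a compiler):

  quotRem n d = if r /= 0 && signum r /= signum n
                  then (q + 1, r - d)
                  else (q, r)
    where (q, r) = divMod n d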


Isaac
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Make it possible to evaluate monadic actions when assigning record fields

2007-07-14 Thread Isaac Dupree

apfelmus wrote:

I see, the dreaded name-supply problem. Well, it just seems that monads
are not quite the right abstraction for that one, right? (Despite that
monads make up a good implementation). In other words, my opinion is
that it's not the monadic code that is over-linearized but the code that
is over-monadized.

The main property of a "monad" for name-supply is of course

 f >> g  =  g >> f

modulo alpha-conversion. Although we have to specify an order, it's
completely immaterial. There _has_ to be a better abstraction than
"monad" to capture this!


I agree completely!

It would be nice if the compiler could choose any order (or none at all, 
depending on implementation?) at its discretion.


If serialization (where the gaps are filled with actual strings as names) 
produces different results depending on the order (similar to a 
name-supply *monad*: not (f >> g  =  g >> f) in a too-significant way), 
we have a purity violation if the order is not well-defined.  Big problem.


So we need to make sure they are used in an abstracted enough manner - 
perhaps only an instance of Eq, to make sharing/uniqueness/identity 
detectable, no more.  In dependently-typed languages I think we could 
have data structures that were fast but provably didn't depend in their 
operation on the material of ordering, for example, for lookup. 
Association-lists only need Eq but can be a little slow...  So with this 
technique in Haskell, Frisby for example would examine the infinite tree 
starting at the returned root, and choose an order for internal use 
based on the shape of the tree (which represents a *cyclic* graph) -- it 
would be unable to use ordering provided by name-supply 
sequencing(monad).  Which is just fine for it.  (except for being 
O((number of rules)^2) to construct a parser, using association lists, I 
think.)  Further abstraction could be added with a primitive 
UniqueNameMap of sorts, similar to (Map UniqueName a)... not enjoyable, 
so it might manage to be implemented in terms of some unsafe operations 
:-/. I hope my pessimism here is proved wrong :)


Isaac
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Make it possible to evaluate monadic actions when assigning record fields

2007-07-10 Thread Isaac Dupree

Adde wrote:

 tmp <- foo
 return Bar {
   barFoo = tmp
 }


There is a feature being worked on in GHC HEAD that would let you do

 do
  tmp <- foo
  return Bar{..}

which fills in each field from the variable of the same name that's in 
scope.  I think this would also satisfy your desire.


(also, the liftM approach doesn't let you choose the order of the 
monadic actions.)


Isaac
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


monomorphism restriction confusions

2007-07-09 Thread Isaac Dupree
Haskell98's monomorphism restriction is too confusing!  See my mistaken 
GHC bug report . 
Whether a binding is monomorphic depends not just on syntax, but on the 
number of type-class constraints on the right-hand side of a binding - 
and I didn't realize that, because this issue usually doesn't come up 
(usually types are already monomorphic or are at least typeclass 
qualified, or at least don't have to be monomorphic to prevent a type 
error).  This finally convinces me that we should dump the H98 m-r in 
favor of the very straightforward "monomorphic pattern bindings"; but if 
we don't, at least I believe that Report Section 4.5.5, Rule 1 needs a 
rewording.  It uses "(un) restricted" to mean "restricted (to be 
monomorphic) IN SOME CASES".  Maybe a word like "suspicious" would be 
less misleading than "restricted" there?


Isaac
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: [Haskell-cafe] Haskell's prefix exprs

2007-07-09 Thread Isaac Dupree

Stefan O'Rear wrote:

On Mon, Jul 09, 2007 at 03:55:52PM +0200, Christian Maeder wrote:

Hi,

I would like haskell to accept the following (currently illegal)
expressions as syntactically valid prefix applications:

f = id \ _ -> []
g = id let x = [] in x
h = id case [] of [] -> []
i = id do []
j = id if True then [] else []


I agree.  The only (minor) concern I have is: that syntax is hard to 
read (by humans) without syntax-highlighting of keywords.


Isaac
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: default fixity for `quotRem`, `divMod` ??

2007-06-19 Thread Isaac Dupree
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Bulat Ziganshin wrote:
> Hello Isaac,
> 
> Monday, June 18, 2007, 9:20:29 PM, you wrote:
> 
>> I was just bitten in ghci by `divMod` being the default infixl 9 instead
>> of the same as `div` and `mod`.
> 
> one of my hard-to-find bugs was exactly in this area: i wrote
> something like  x `div` y+1  instead of  x `div` (y+1)
> 
> so, based on practical experience, i have opposite proposal: give to
> all `op` lowest precedence (a bit higher than of '$') because it
> complies to its visual effect

I wonder how much code this would break.  Maybe we could have a warning
for anything that relies on the default fixity (for things like
x `div` y+1 that don't give a type error anyway).  And it would be
nice if even e.g. `div` had the same precedence as all other `op`s in
existence -- but I'm pretty sure that would break a bunch of code.

Isaac
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFGeDX1HgcxvIWYTTURAqA7AJ9WqCXZy2X/LhV18o6dpENaYx0k6gCgqXeG
yAubnfVm3LBBj3l1/z/MZ18=
=G8Uf
-END PGP SIGNATURE-
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: type signatures in export lists

2007-06-18 Thread Isaac Dupree
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Arie Peterson wrote:
> Isaac Dupree wrote:
> 
>> One big question: can their presence have any effect?
>> * on the module doing the exporting (conflict with the presence of
>> in-module type-signature for the same thing; type restriction in-module;
>> monomorphism-restriction-lifting or defaulting-removal of the named thing)
>> * on modules importing this one (can a module re-export something,
>> giving it a more restrictive type-signature?)
> 
> Letting an export-list type signature be equivalent to a normal one has
> the benefit of being simple (to explain and implement). Exporting a
> function with a less polymorphic type than its in-module type seems a bit
> awkward: you would have two different functions (one internal and one
> exported) with the same name.

Agreed.

> 
> If export-list and normal type signatures would be equivalent, the only
> benefit of allowing the former (compared to writing it in a comment) would
> be that the compiler can check it for consistency. Right?

Currently, it is not allowed to provide duplicate equivalent
type-signatures for something. Neither is it allowed to put top-level
type signatures that describe a function you import with the intent of
exporting it; even so, you should be able to specify your module's
interface however the exported functions are provided.

How about this: the presence of an export-list type signature means that
you will get a compile error if the type provided is not equivalent to
the type the exported object has anyway, defined as:
"((exportedObject :: type in export list) :: actual type)" typechecks.
(where "actual type" is after all effects of monomorphism restriction,
defaulting, etc., have taken effect)
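
For example (my own illustration): with

  f :: Num a => a -> a
  f x = x + 1

an export-list signature of (f :: Num a => a -> a) would be accepted,
but one of (f :: Int -> Int) would be rejected, because
((f :: Int -> Int) :: Num a => a -> a) does not typecheck.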


> 
>> The type doesn't need to be exported. (which is more likely if it's just
>> a type synonym than a new type.)  So what scope are the names in the
>> export-list-type-signature drawn from?  It would be odd if a type
>> signature couldn't be given because some names weren't exported; but it
>> would be odd if a module-user looking at the interface saw some types
>> that weren't defined anywhere visible.
> 
> As a user of the module, I would argue that all types (including synonyms)
> appearing in the signature of an exported function should be exported as
> well. Not sure if this needs to be enforced, though.

Probably shouldn't be enforced - compilers could be made to give a
warning for such bad behavior though.  Luckily, what is in scope "just
outside" a module, in its export list, is purely a subset of what is in
scope in the module, so ... wait.

module Foo1 where
data Foo = CFoo1
module Foo2 where
data Foo = CFoo2
module Ambig (Foo2.Foo(), foo) where
import Foo1
import Foo2
foo = CFoo2

Not in the presence of ambiguity. Here, to be precise, we have
(foo :: Foo2.Foo).  Since the export list allows and requires Foo to be
qualified when Foo is exported, I think it's fair to require that a type
signature for 'foo' in the export list qualifies the Foo type it has
with Foo2, even though from the module-user's point of view, there is
only one Foo.


Isaac
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFGdsi7HgcxvIWYTTURAmqPAJ4lIrpsSK7bJWXYMxj/t6SOQkq+XwCgxtu2
9NHc1sG8qMtB6LmzyAfi9FY=
=xlwH
-END PGP SIGNATURE-
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


default fixity for `quotRem`, `divMod` ??

2007-06-18 Thread Isaac Dupree
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

I was just bitten in ghci by `divMod` being the default infixl 9 instead
of the same as `div` and `mod`.  Sure enough, the standard prelude
doesn't specify a fixity for `quotRem` and `divMod` even though `quot`,
`rem`, `div`, and `mod` are all infixl 7.  I propose that `quotRem` and
`divMod` also become infixl 7.

Isaac
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFGdr7dHgcxvIWYTTURAu07AKCb0RAI343lnRlH1FgI1rMy0dx1FQCfcnsV
g6HUB5vDVbk9LPGi51WpY+o=
=iESL
-END PGP SIGNATURE-
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


type signatures in export lists

2007-06-18 Thread Isaac Dupree
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

They're not implemented, but they won't be until we decide exactly what
they mean - it's not simple.

(at least the syntax is obvious:
http://hackage.haskell.org/trac/haskell-prime/wiki/PermitSignaturesInExports
)

One big question: can their presence have any effect?
* on the module doing the exporting (conflict with the presence of
in-module type-signature for the same thing; type restriction in-module;
monomorphism-restriction-lifting or defaulting-removal of the named thing)
* on modules importing this one (can a module re-export something,
giving it a more restrictive type-signature?)


Case study 1:

module Foo1 (Foo(..), foo :: Foo) where

data Foo = CFoo

foo :: Foo
foo = CFoo


If we interpret it as an in-module type signature, it will be a problem
because there already is one for foo. (unless we should relax the rules
for uniqueness of type signatures anyway?)


Case study 2:

module Foo2 (foo :: Foo) where

data Foo = CFoo

foo :: Foo
foo = CFoo


The type doesn't need to be exported. (which is more likely if it's just
a type synonym than a new type.)  So what scope are the names in the
export-list-type-signature drawn from?  It would be odd if a type
signature couldn't be given because some names weren't exported; but it
would be odd if a module-user looking at the interface saw some types
that weren't defined anywhere visible.

This suggests that wildcards in type signatures could be helpful for this:
module Foo3 (foo :: Int -> Foo -> Bool)
versus
module Foo4 (foo :: Int -> _ -> Bool)


Open for discussion.

Isaac



-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFGdmneHgcxvIWYTTURAlTpAJ425ZL8WPzkAeIxRbgGSOQFCKtrnwCeIG0P
aawB6GDwSfiRNM20MErBj8E=
=KCKj
-END PGP SIGNATURE-
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: What separates lines in Haskell code?

2007-06-17 Thread Isaac Dupree
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Antti-Juhani Kaijanaho wrote:
> On Thu, Jun 14, 2007 at 09:11:12AM -0400, Isaac Dupree wrote:
>> In the report, under the layout rule (section 9.3), "The characters
>> newline, return, linefeed, and formfeed, all start a new line."  (Which
>> four characters are those? from http://en.wikipedia.org/wiki/Linefeed ,
>> I'm guessing "LF: Line Feed, U+000A", "CR: Carriage Return, U+000D",
>> "FF: Form Feed, U+000C", and what's the fourth one?  Newline usually
>> refers to '\n', which is LF, but linefeed has a direct name
>> correspondence to that also!)
> 
> The H98 lexical syntax defines newline as
>   newline  ->  return linefeed | return | linefeed | formfeed
> 
> It could, I suppose, also refer to the Unicode character U+2028 LINE 
> SEPARATOR,
> but then probably U+2029 PARAGRAPH SEPARATOR ought to be included as well.
> 
> There are, BTW, Unicode guidelines for newline usage in section 5.8 of the
> Unicode 5.0 online edition.

http://www.unicode.org/versions/Unicode5.0.0/ch05.pdf#G10213

Alright, I think the comment in the layout-rule section should not try
to enumerate newlines, but rather should refer back to the lexical
definition of 'newline'.

As per the above Unicode guideline, the existing set of characters that
Haskell98 accepts as newlines, and a section of the Unicode regex
guidelines <http://unicode.org/reports/tr18/>, I propose that all of the
following be accepted as line separators:

\u000A | \u000B | \u000C | \u000D | \u0085 | \u2028 | \u2029 | \u000D\u000A

i.e. (not in the same order) CR, LF, CRLF, NEL, VT, FF, LS, PS.

Unfortunately that makes it a little hard to process; maybe translate
all into '\n' before doing any processing (such as unliteration).
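e.g. something along these lines (a sketch of mine, not wording from
the report):

  normalizeNewlines :: String -> String
  normalizeNewlines []               = []
  normalizeNewlines ('\r':'\n':rest) = '\n' : normalizeNewlines rest -- CRLF
  normalizeNewlines (c:rest)
    | c `elem` "\r\v\f\x85\x2028\x2029" = '\n' : normalizeNewlines rest
    | otherwise                         = c    : normalizeNewlines rest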


Isaac
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFGdXETHgcxvIWYTTURApE8AJsEdw8zUrri+EzXfa+EhlyC1UT2TACdHjgp
RjtYbkXTMFadsavlzhCHDJ0=
=Nbl0
-END PGP SIGNATURE-
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


"qvars"? pragma syntax generally?

2007-06-16 Thread Isaac Dupree
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

For INLINE and NOINLINE the report uses "qvars"
http://haskell.org/onlinereport/pragmas.html
which is not defined in http://haskell.org/onlinereport/syntax-iso.html
(although its meaning is obvious since var, qvar and vars are all defined).

Is it permissible for compilers to die on pragma syntax they don't
personally like? For example GHC chokes on {-# INLINE Main.main #-} at
top level, but it would be worse with non-standardized pragmas. (The
compiler giving a warning that it understands a pragma of that name, but
not the syntax given, would be best.)  "the pragma should be ignored if
an implementation is not prepared to handle it."

WHAT IS pragma syntax?? The report doesn't say how whitespace is
handled.  syntax-iso doesn't mention pragmas; in its opinion everything
from {-# hi-} to {-#{-##-}#-} to {-# INLINE main #-} is a perfectly good
comment.  [1]  But the report recommends some specific syntaxes.  To all
appearances, it is expected that they follow (guessing from other
Haskell syntax) the form

pragma -> {-#pragmaid(some pragma-specific syntax that is
consistent with the whole thing being ncomment)#-}
pragmaid -> (large|_) {large|_|'}

Still... whitespace? GHC understands as a pragma

{-#

LINE 3 "foo.hs" #-}

but not

{-#
{- nested comment -}
LINE 3 "foo.hs" #-}

; I don't even know about inside or next to the (some pragma-specific
syntax).  Is the inside of pragmas supposed to be lexed somewhat the
same way as the rest of the Haskell file, or "it can vary" because
pragmas can be anything they want... hopefully the convention of
beginning pragmas with a name cannot vary.


[1] (hmm, maybe pragmas should be used to indicate haddock comments -
too late now, and probably too verbose too)

Isaac
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFGdCuyHgcxvIWYTTURAnljAJ9NLJKW+CTroJ0Vg43bgGWZ3DXJHwCgp+Z3
SAX/cK5PSV7B4TBfwg664xM=
=81g6
-END PGP SIGNATURE-
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


What separates lines in Haskell code?

2007-06-14 Thread Isaac Dupree
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

In the report, under the layout rule (section 9.3), "The characters
newline, return, linefeed, and formfeed, all start a new line."  (Which
four characters are those? from http://en.wikipedia.org/wiki/Linefeed ,
I'm guessing "LF: Line Feed, U+000A", "CR: Carriage Return, U+000D",
"FF: Form Feed, U+000C", and what's the fourth one?  Newline usually
refers to '\n', which is LF, but linefeed has a direct name
correspondence to that also!)

The literate haskell section 9.4 just talks about lines without being
specific about how they're specified.  My proposed sample implementation
uses Prelude.lines ...

Prelude.lines presumes that lines are separated only by '\n'.

(Of course, for Prelude.unlines to be an inverse operation (which it's
not anyway) there has to be only one character that makes a line-separation)


Isaac
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFGcT5vHgcxvIWYTTURAowrAJ4rz3/Sc763l8TEharcnWcma5BkBgCfRhAF
XbfCIG8tnym1gZFRZf4KuRo=
=it7M
-END PGP SIGNATURE-
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Literate Haskell specification

2007-05-29 Thread Isaac Dupree
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Isaac Dupree wrote:
> As I brought up earlier in Haskell-cafe
> http://thread.gmane.org/gmane.comp.lang.haskell.cafe/20026
> , the Haskell98 specification for literate haskell (report section 9.4)
> could use some work, at least clarifications (existing haskell
> implementations differ in some ways) - see that thread for details.
> Since I haven't successfully gotten to writing a concrete revision of
> that section, I thought I'd at least bring the issue to the attention of
> specifically haskell-prime people, as it is an issue that "should
> definitely" be addressed in the Report.  Hopefully there's someone
> around here who might tackle it :)

Like myself :)

See
http://isaac.cedarswampstudios.org/2007/LiterateHaskellPrime/literate.html
for my draft (look in
http://isaac.cedarswampstudios.org/2007/LiterateHaskellPrime/ for source
files).  It needs feedback.

We could make the "not advisable" things be "if you allow them, they are
this way; but you don't need to allow them".  I have provided alternate
wordings for both possibilities in both places that my draft says
something is "not advisable". (is talking about "implementations" the
right way to go about wording it?)


Technical help needed:

There seems to be duplicate sections of the report for this "literate",
even in the source-files?? literate.verb and in syntax-iso.verb.  In
fact in the Haskell 98 revised online report Full Table of Contents, it
is duplicated, in sections 9.4 and 9.6.

What am I doing wrong that makes the layout of my BNF a little funny,
some spaces missing, some lines wrapped where they shouldn't be?

What is this \Haskell{} rather than just Haskell?  Assuming I should use
it... it inserts a space before my commas and periods, ("... Haskell ,
...") which is bad.

Does the
%
% $Header: /home/cvs/root/haskell-report/report/literate.verb,v 1.5
2002/12/02 14:53:30 simonpj Exp $
%
at the beginning of the files still make sense?  Should it be deleted?


Other observation:

The LaTeX-style example doesn't seem very good - e.g., beginning with
\documentstyle, and the lhs2TeX processor, don't go well together.
(apparently beginning with \documentclass instead is necessary to make
the \usepackage{..}s it generates work, and such a heading may not be
necessary at all for that...)


Isaac
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFGXFMIHgcxvIWYTTURApslAJ4qXnKixhz6aClK/SrJfr/x9odA6wCfYqEm
d6vUpTPpLHSm1ObSQqsxI4A=
=9Za5
-END PGP SIGNATURE-
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: [Haskell-cafe] global variables

2007-05-22 Thread Isaac Dupree
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Isaac Dupree wrote:
> Maybe some sort of ISOLATE, DON'T_OPTIMIZE (but CAF), or
> USED_AS_GLOBAL_VARIABLE pragma instead of just the insufficient NOINLINE
> would be a good first step... if successful it would remove the
> occasional need for -fno-cse for a whole module in GHC, at least.

ISOLATE, DON'T_OPTIMIZE are actually bad names for the whole effect,
which requires persistent CAF semantics.  An implementation that doesn't
make top-level definitions be CAFs, or even one that is willing to
garbage-collect them when memory is tight such that they need
recalculation later, would need a special case for global variables to
make them work.

i.e. I'm not sure if there exists a reasonable pragma while the code
still uses unsafePerformIO.

Hmm

How about

so,
{-# NOINLINE var #-}
var :: IORef Int
var = unsafePerformIO (newIORef 3)

- -->

var :: IORef Int
var = {-# EVALUATE_THIS_TEXT_ONLY_ONCE #-} (unsafePerformIO (newIORef 3))

to capture the desired semantics: text-based uniqueness, no duplication,
no sharing of the IORefs (sharing the pure contents is fine), and no
need to actually evaluate it any times at all. {-#
EVALUATE_THIS_TEXT_ONLY_ONCE #-} is syntactically like a (special)
function.  Clearly it is an impossible demand for polymorphic things, so
the compiler could complain (at least a warning) if the (var :: IORef
Int) line was left off, for example. I guess it would also complain
about non-type(class) argument dependencies too such as (f x =
(unsafePerformIO (newIORef (x::Int))) )...

Food for thought :-)


Isaac
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFGU02YHgcxvIWYTTURAoCaAKCkDH7Pd7JbNt0TmNig9j7ujiUV9ACZAevI
QOjdmMbrPfVrKBafZshCh7c=
=9/5v
-END PGP SIGNATURE-
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: [Haskell-cafe] global variables

2007-05-20 Thread Isaac Dupree
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Adrian Hey wrote:
> Isaac Dupree wrote:
>> Maybe some sort of ISOLATE, DON'T_OPTIMIZE (but CAF), or
>> USED_AS_GLOBAL_VARIABLE pragma instead of just the insufficient NOINLINE
>> would be a good first step... if successful it would remove the
>> occasional need for -fno-cse for a whole module in GHC, at least.
> 
> I have a hard time trying to understand why anyone would prefer
> this to the simple and clear <- syntax that's been proposed. As
> for the ACIO monad itself, this is utterly trivial and requires
> no language change. It's just a library.
> 
> Maybe the first pragma you propose might have other uses to control
> optimisations, so I'm not totally anti this. But generally I
> dislike pragmas (I always find myself wondering what's wrong
> with the language design that makes the pragma necessary).
> 
> So pragmas that influence optimisation are something I can
> live with. But using pragmas to influence *semantics* really
> is an evil practice IMO and is something that should be
> discouraged, not made an unavoidable necessity.

Indeed.  My rationale:

 - It would get some reliable semantics implemented in GHC (and/or other
compilers hopefully).  Since what we have already is a multi-part hack,
this might be a nontrivial/important piece of work, and should make such
things more reliable.
 - Pragmas (NOINLINE) are already used to influence semantics here.
This idea doesn't introduce anything "worse" than that.  And it doesn't
require that people subscribe to particular syntax, ACIO implementation,
etc.
 - Once implemented, if I understand correctly (do I?), it should make
it easier for non-Simon to try out the hard work of a "real" solution
involving non-pragma-syntax changes, ACIO libraries, or whatever is desired.

Not because I think it's a great solution (it doesn't even deserve to be
called a real "solution" at all), but because nothing is being implemented
now, for whatever reason.  So I'm putting out this idea, in case it's a
step in the right direction that someone is willing to take.

Isaac
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFGUJlEHgcxvIWYTTURAiGRAJ9ovzlD1Tc/Ce5tbCbYBBGcWLX/9ACfYzc3
a+xC3hQrXB3V9Iq+0vzxnmg=
=EGk7
-END PGP SIGNATURE-
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: [Haskell-cafe] global variables

2007-05-20 Thread Isaac Dupree
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Adrian Hey wrote:
> [cc'ing HPrime]
> 
> Isaac Dupree wrote:
>> The unsafePerformIO hack being used is not very satisfactory given how
>> many optimizations make it difficult to use safely in practice.  This
>> hack is also used many places.  I would be happier if that situation
>> were not true, and I suspect there's something like a consensus on
>> _that_. (maybe not as strong as "_needs_ a solution" in the short-to-mid
>> term future)
> 
> Considering the value that the Haskell community normally places on
> sound semantics, reliance on such an appalling hack seems pretty bad to
> me. If a solution doesn't find it's way into H' then how many more years
> is it going to be with us? It's just embarrassing :-)

Yes, also it places value on REALLY EXTREMELY (excessively?) SOUND
semantics, and on the modularity of the language even more than the
modularity of its uses (or something like that :-)

Maybe some sort of ISOLATE, DON'T_OPTIMIZE (but CAF), or
USED_AS_GLOBAL_VARIABLE pragma instead of just the insufficient NOINLINE
would be a good first step... if successful it would remove the
occasional need for -fno-cse for a whole module in GHC, at least.

Isaac
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFGUF4yHgcxvIWYTTURAvqWAJ46eFRt5LK1lUwqr2BmHVSrHljxzwCfYGJB
x5ivAFEw5vYKbxTPIg+PrIU=
=0xVK
-END PGP SIGNATURE-
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Wanted: warning option for usages of unary minus

2007-05-19 Thread Isaac Dupree
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

John Meacham wrote:
> another option would be to only count it as a negative if there is a
> non-identifier character preceeding it. A little ugly. but still better
> than the current situation IMHO.

I think GHC's lexer, "Alex", can do this, although this functionality is
not used anywhere else... it seems a little out of character.  I don't
really like that "(3-2)-1" would be parsed differently because it's a
parenthesized expression; consider "3^2-1" vs. "(3^2)-1" ...
Isaac

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFGTwMCHgcxvIWYTTURAkzHAKCdekuA6rUw4QcnIV3Qq9WJ8ZkljQCfTH5G
c0jDDrAGLtBVZ4WVRdTDJu8=
=1BDf
-END PGP SIGNATURE-
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Wanted: warning option for usages of unary minus

2007-05-18 Thread Isaac Dupree
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Taral wrote:
> On 5/17/07, Joseph H. Fasel <[EMAIL PROTECTED]> wrote:
>> *Sigh*  The problems with unary minus were discussed in the dim mists of
>> time before we published the first Haskell report.  We considered then
>> using a separate symbol for unary negation (as does APL, for example),
>> but (IIRC) this was regarded as unfriendly to Fortran programmers.
> 
> [breaking cc list]
> 
> Would this kind of thing be eligible for Haskell'? I never had a
> problem with _1 in APL-type languages... and I think it's best to be
> very clear about intent.
> 

Haskell' is "supposed to" be a conservative standard describing
mostly-already-implemented features. (of course that is why to implement
features like this efficiently if there is a good reason for them)

underscore seems like a bad candidate in Haskell because:
 - _1 is presently a lowercase Haskell identifier
 - some people want mid-number underscores, e.g. 1_048_576
 - ...

Isaac
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFGTXVBHgcxvIWYTTURAv+HAJ4mCqpXLLUEHaYeHrw8l6lx3eBr4QCgnTNl
+g5Rllpuk/8s6p+1hTxi4Ew=
=IvPt
-END PGP SIGNATURE-
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Wanted: warning option for usages of unary minus

2007-05-17 Thread Isaac Dupree
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Twan van Laarhoven wrote:
> There is one other alternative for parsing:
>"-" is a unary minus if and only if it is
>a) preceded by whitespace or one of "[({;,", and
>b) not followed by whitespace.
> 
> So:
>   x - 1 ==(-) x 1
>   x-1   ==(-) x 1
>   x -1  ==x (negate 1)
>   x -(1)==x (negate 1)
>   x (-1)==x (negate 1)
>   x (- 1)   ==x (\y -> y - 1)
> 
> Just an idea.

Indeed, and in some language syntax designs it would certainly be a good
system for prefix operators.

Existing parsers may have some difficulty. How about
> {-comment-}-1
?
how about
> WeirdNumber{value=2,weird=True}-1
?

Although it would likely make any actual code work, it seems a bit
complicated from the mindset of current Haskell parsing/lexing.

"(b) not followed by whitespace." can be replaced by
"(b) followed by a digit"
if it is desired not to allow it for negating arbitrary expressions.


Isaac
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFGTJRgHgcxvIWYTTURAqpMAJ9rpCFwzOG/ZSF0qpM/hD/mFKrQ1wCfSRCK
2nKiBzRs/8thmgrdBT+SowA=
=lFCl
-END PGP SIGNATURE-
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Wanted: warning option for usages of unary minus

2007-05-17 Thread Isaac Dupree
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

I wrote:
>> negative :: Num a => Integer -> a
>> negative a = fromInteger (negate a)

Oops, I forgot Rational literals, they make things a little more
complicated :(
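Presumably a second helper along the same lines would be wanted for
them (the name here is just an illustration):

  negativeFrac :: Fractional a => Rational -> a
  negativeFrac a = fromRational (negate a)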

Isaac

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFGTJKxHgcxvIWYTTURAtGMAJ9oetioh1rfTF1o+bqCWqWxG/LSiwCgghq9
pOBHdfUp625ll1lpTbW0X+w=
=X0oP
-END PGP SIGNATURE-
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Wanted: warning option for usages of unary minus

2007-05-17 Thread Isaac Dupree
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Iavor Diatchki wrote:
> Hello,
> 
> I agree with Simon on this one: "x-1" should parse as expected (i.e.,
> the infix operator "-" applied to two arguments "x" and "1"). Having
> this result in a type error would be confusing to both beginners and
> working Haskell programmers.
> 
> I think that if we want to change anything at all, we should simply
> eliminate the unary negation operator without changing the lexer
> (i.e., we would have only positive literals).  Then we would have to
> be explicit about what is currently happening implicitly in
> Haskell98---we would write "negate 1" instead of "-1".
> 
> However, I don't thinks that this change is justified---as far as I
> can see, the only benefit is that it simplifies the parser.  However,
> the change is not backward compatible and may break some programs.

Simplifies the _mental_ parser, much more important than the compilers'
parsers which are already implemented.

Here is what I am thinking to do:

In my own code, since there seems to be so much difficulty with the
matter, don't use (-X) to mean negative for any kind of X whatsoever.
For this I want a warning for ALL usages of the unary minus operator.
I'll define a function for my negative literals that calls fromInteger
and negate in the order I would prefer to my sensibilities, which is
actually different from the order that the Report specifies for (-x) :

> {-# INLINE negative #-}
> negative :: Num a => Integer -> a
> negative a = fromInteger (negate a)

I might feel like having a parallel

> {-# INLINE positive #-}
> positive :: Num a => Integer -> a
> positive a = fromInteger a

(e.g. C has a unary + operator... and "positive" even has the same
number-of-characters length as "negative"!).


For GHC's unboxed negative literals I think I will still change the
lexer/parser since the current way it's done is rather confusing anyway
(as previously described)


I don't know what else is worth implementing... NOT an option to turn
off parsing of unary minus, since warnings are good and it would just
create more incompatibility.  John Meacham, since you seem to be
interested, what are your thoughts now?  Advice on flag names - or any
other discussion (is anyone interested in having something? say so!) -
would be appreciated.


Isaac
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFGTDBQHgcxvIWYTTURAt14AJ9+Avd3FJ54+f0eNzUBFM7tOPy5TgCfRys8
usEFDx9uNH2UjUHBbG9kyGs=
=M3CU
-END PGP SIGNATURE-
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Literate Haskell specification

2007-04-24 Thread Isaac Dupree
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Vivian McPhail wrote:
> Dear Committee
> 
> If I recall correctly, in the tex-style literate haskell specification,
> code
> is delimited by a
> 
> \begin{code}
> \end{code}
> 
> This does not allow for multilanguage support in a single source file.  It
> would be nice to have a single document in which we could mix English,
> Haskell, and, for example, Coq proofs.
> 
> To this end, would it make more sense to delimit haskell code by
> 
> \begin{haskell}
> \end{haskell}
> 
> ?

It very well could make sense, but we are not in the business of
re-designing here (it would likely never be finished). For example, a
quote from
http://hackage.haskell.org/trac/haskell-prime
"Haskell' will be a conservative refinement of Haskell 98.
...
We will strive to only include tried-and-true language features"

BTW, if other languages such as Coq have any other convention than
exactly the name \begin{code} / \end{code} , they could be mixed in the
same source-file anyway, I suppose.  Which could be nice, as you note.


Thanks,
Isaac
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFGLdvyHgcxvIWYTTURAmvsAJ0Q11sJPmwHM/ORNeE9hqO04ePGaQCdGekm
HbMeOpa4X2pPlDp6ePbtzYs=
=pQu+
-END PGP SIGNATURE-
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Relax the restriction on Bounded derivation

2007-04-18 Thread Isaac Dupree

Isaac Dupree wrote:
> However there is a good argument for having some sort of bounded-enum
> class for things that have a finite number of discrete positions. These
> have log(number of possibilities) information content and can (in
> theory) be serialized with such a number of bits known from the type.
> Designing such a class could be interesting...

In particular, this hypothetical class could be derived more generally
than Enum:
data Blah a b = Baz Int a Bool | Quux | Qx b
derived instance (Finite a, Finite b) => Finite (Blah a b)
since Int and Bool are in this class.
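
To make the idea concrete, here is a minimal sketch of what I have in
mind (the class name and its method are made up, and deriving a
user-defined class isn't something Haskell 98 can do, so the "derived"
instance is written out by hand):

class Finite a where
  universe :: [a]            -- every inhabitant; finite by intention

instance Finite Bool where
  universe = [False, True]

instance Finite Int where
  universe = [minBound .. maxBound]   -- finite, though absurdly long

-- roughly what a derived instance for Blah would amount to:
instance (Finite a, Finite b) => Finite (Blah a b) where
  universe = [Baz i x flag | i <- universe, x <- universe, flag <- universe]
          ++ [Quux]
          ++ [Qx y | y <- universe]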

Isaac


Re: Relax the restriction on Bounded derivation

2007-04-18 Thread Isaac Dupree

Ravi Nanavati wrote:
> On 4/17/07, Neil Mitchell <[EMAIL PROTECTED]> wrote:
>>
>> Hi,
>>
>> From Section 10 of the Haskell report, regarding automatic derivation:
>>
>> to derive Bounded for a type: "the type must be either an enumeration
>> (all constructors must be nullary) or have only one constructor."
>>
>> This seems a very artificial restriction - since it allows you to be
>> in any one of two camps, but no where in between. It also means that
>> Either doesn't derive Bounded, while it could easily do so:
>>
>> instance (Bounded a, Bounded b) => Bounded (Either a b) where
>> minBound = Left minBound
>> maxBound = Right maxBound
>>
>> So I propose that this restriction be lifted, and that the obvious
>> extension be given such that minBound is the lowest constructor with a
>> pile of minBounds, and maxBound is the highest constructor with a pile
>> of maxBound.
> 
> 
> In general, I like the idea of of allowing more flexible derivation of
> Bounded, but I'm worried your specific proposal ends up mandating the
> derivation of Bounded instances for types that aren't really "bounded"
> (used
> in a deliberately loose sense). Consider the following type:
> 
> data Foo = A Char | B Integer | C Int
> 
> On some level, there's no real problem in creating a Bounded instance as
> follows (which is how I interpret your proposal):
> 
> instance Bounded Foo where
>   minBound = A (minBound :: Char)
>   maxBound = C (maxBound :: Int)
> 
> On the other hand, there's a real sense in which the type isn't actually
> "bounded". For instance, if it was also an instance of Enum, enumerating
> all
> of the values from minBound to maxBound might not terminate. I'm not sure
> what to do about the scenario. Should we (unnecessarily) insist that all of
> the arguments of all of the constructors be Bounded to avoid this? Should
> Bounded more explicitly document what properties the minBound, maxBound and
> the type should satisfy? Or something else?

IMO, Bounded only needs to satisfy (if Foo is in Ord):
  forall a :: Foo,  a >= minBound && a <= maxBound
I want to be able to define Bounded for
  data ExtendedInteger = NegativeInfinity | PlainInteger Integer
                       | PositiveInfinity
- preferably by deriving, because it's easier.
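
Under the relaxed rule (and assuming Ord is derived, so the
constructor order really does make these the least and greatest
values), I'd expect the derived instance to amount to this
hand-written one - just a sketch:

instance Bounded ExtendedInteger where
  minBound = NegativeInfinity   -- lowest constructor, nullary, no minBounds needed
  maxBound = PositiveInfinity   -- highest constructor, nullary, no maxBounds needed
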
If we require properties of Enum... Enum _already_ has problems with
instances like Integer, where fromEnum :: a -> Int only has a limited
range of possible outputs; there is little reasonable meaning for
fromEnum applied to an Integer outside the range of Int
(hugs: Program error: arithmetic overflow).

(Float and Double *aren't* in Bounded. Then again, Haskell98 doesn't
require them to contain non-_|_ values of +-infinity.)

Furthermore, there are bounded things that aren't enumerable anyway (I
think - some lattices, for example), so it would be odd to add that
restriction just because the type might also be in Prelude.Enum.

However there is a good argument for having some sort of bounded-enum
class for things that have a finite number of discrete positions. These
have log(number of possibilities) information content and can (in
theory) be serialized with such a number of bits known from the type.
Designing such a class could be interesting...
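
Something along these lines is what I'm imagining (all of the names
are made up; the dummy argument is used only for its type, in the
style of sizeOf from Foreign.Storable):

class FiniteEnum a where
  cardinality :: a -> Integer    -- number of inhabitants of the type
  toIndex     :: a -> Integer    -- in the range [0 .. cardinality-1]
  fromIndex   :: Integer -> a

-- bits sufficient to serialize any value, knowable from the type alone
bitsNeeded :: FiniteEnum a => a -> Int
bitsNeeded x = ceiling (logBase 2 (fromIntegral (cardinality x)) :: Double)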

Rather, I would ask "Must any inhabitant of a type in Enum be reachable
by pred or succ from an arbitrary inhabitant of the type?"  For example,
I could declare an instance of Enum that contradicted that:
data Something = Some Integer | Another Integer
where pred and succ always stayed within the same constructor, and for
fromEnum/toEnum I would just find some way to encode some common (i.e.
relatively small magnitude, just as the usual instance Enum Integer is
limited this way) values of Something into an Int. Or are
fromEnum/toEnum supposed to obey some sort of properties, when they are
defined, relative to the rest of the methods? I would guess not, given
the comment
-- NOTE: these default methods only make sense for types
--   that map injectively into Int using fromEnum
--  and toEnum.
(hugs: fromEnum (2.6 :: Double) ---> 2)
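
Roughly, the contrary instance for Something that I have in mind could
look like the sketch below (the Int encoding is arbitrary and partial,
much as the standard instance Enum Integer already is):

data Something = Some Integer | Another Integer
  deriving (Eq, Show)

instance Enum Something where
  -- pred and succ never cross between constructors, so no Another
  -- value is reachable from a Some value, or vice versa
  succ (Some n)    = Some (n + 1)
  succ (Another n) = Another (n + 1)
  pred (Some n)    = Some (n - 1)
  pred (Another n) = Another (n - 1)
  -- an arbitrary, partial encoding of "common" values into Int:
  -- even codes for Some, odd codes for Another
  fromEnum (Some n)    = 2 * fromEnum n
  fromEnum (Another n) = 2 * fromEnum n + 1
  toEnum i
    | even i    = Some    (toEnum (i `div` 2))
    | otherwise = Another (toEnum (i `div` 2))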


Cheers,
Isaac


Literate Haskell specification

2007-04-06 Thread Isaac Dupree

As I brought up earlier in Haskell-cafe
http://thread.gmane.org/gmane.comp.lang.haskell.cafe/20026
, the Haskell98 specification for literate Haskell (Report section 9.4)
could use some work, or at least some clarifications (existing Haskell
implementations differ in some ways) - see that thread for details.
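
One small illustration of the kind of detail I mean (this is just the
Report's own rule that a bird-track program line may not be adjacent
to a non-blank comment line - not a summary of that thread):

This comment line sits directly above a program line, with no blank
line in between, which the Report says is an error:
> main :: IO ()
> main = putStrLn "hello"
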
Since I haven't successfully gotten around to writing a concrete
revision of that section, I thought I'd at least bring the issue
specifically to the attention of haskell-prime people, as it is an
issue that "should definitely" be addressed in the Report.  Hopefully
there's someone around here who might tackle it :)


Good luck,
Isaac