Re: [Haskell-cafe] Tutorial on JS with Haskell: Fay or GHCJS?

2013-09-04 Thread Chris Smith
I second the recommendation to look at Haste.  It's what I would pick for a
project like this today.

In the big picture, Haste and GHCJS are fairly similar.  But when it comes
to the ugly details of the runtime system, GHCJS adopts the perspective
that it's basically an emulator, where compatibility is the number one
goal.  Haste goes for a more native approach; while the evaluation
semantics and such are completely faithful to Haskell, it doesn't go out of
the way to emulate the gritty details of GHC's runtime system.
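To ground the comparison: both compilers consume ordinary Haskell modules, so a toy program like the one below should build with either toolchain (with Haste via its `hastec` command, if memory serves). The module and the numbers are purely illustrative:

```haskell
module Main where

-- Plain Haskell: both Haste and GHCJS accept ordinary modules like
-- this one; the differences show up in the generated JavaScript and
-- in how faithfully GHC's runtime details are reproduced.
fib :: Int -> Integer
fib n = fibs !! n
  where fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

main :: IO ()
main = print (fib 30)  -- prints: 832040
```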
On Sep 4, 2013 3:38 AM, "Nathan Hüsken"  wrote:

>  In my opinion haste is somewhere between Fay and ghcjs. It supports more
> than Fay, but unlike ghcjs some PrimOps are not supported (weak
> pointers for example).
>
> It is a little bit more "direct" than ghcjs, in the sense that it does not
> need such a big rts written in js.
>
> I like haste :).
>
> What I wonder is how the outputs of these 3 compilers compare speed wise.
>
> On 09/04/2013 11:11 AM, Alejandro Serrano Mena wrote:
>
> I haven't looked at Haste too much, I'll give it a try.
>
>  My main problem is that I would like to find a solution that will
> continue working for years (somehow, that will become "the" solution for
> generating JS from Haskell code). That's why I see GHCJS (which just
> includes some patches to mainstream GHC) as the preferred solution, because
> it seems the most likely to continue working when new versions of GHC
> appear.
>
>
> 2013/9/4 Niklas Hambüchen 
>
>> Hi, I'm also interested in that.
>>
>> Have you already evaluated haste?
>>
>> It does not seem to have any of your cons, but maybe others.
>>
>> What I particularly miss from all solutions is the ability to simply
>> call parts written in Haskell from Javascript, e.g. to write `fib` and
>> then integrate it into an existing Javascript application (they are all
>> more interested in doing the other direction).
>>
>> On Wed 04 Sep 2013 17:14:55 JST, Alejandro Serrano Mena wrote:
>> > Hi,
>> > I'm currently writing a tutorial on web applications using Haskell. I
>> > know the pros and cons of each server-side library (Yesod, Snap,
>> > Scotty, Warp, Happstack), but I'm looking for the right choice for
>> > client-side programming that converts Haskell to JavaScript. I've
>> > finally come to Fay vs. GHCJS, and would like your opinion on what's
>> > the best to tackle. My current list of pros and cons is:
>> >
>> > Fay
>> > ===
>> > Pros:
>> > - Does not need GHC 7.8
>> > - Easy FFI with JS
>> > - Has libraries for integration with Yesod and Snap
>> >
>> > Cons:
>> > - Only supports a subset of GHC (in particular, no type classes)
>> >
>> >
>> > GHCJS
>> > ==
>> > Pros:
>> > - Supports full GHC
>> > - Easy FFI with JS
>> > - Highly opinionated point: it will stay around longer than Fay (and
>> > that's very important for not having a tutorial that is outdated in a few months)
>> >
>> > Cons:
>> > - Needs GHC 7.8 (but provides a Vagrant image)
>> >
>> >
>>  > ___
>> > Haskell-Cafe mailing list
>> > Haskell-Cafe@haskell.org
>> > http://www.haskell.org/mailman/listinfo/haskell-cafe
>>
>
>
>
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
>
>
>
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
>
>
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Possible extension to Haskell overloading behavior

2013-07-09 Thread Chris Smith
This is working now.  Trying to use -XRebindableSyntax with
-XImplicitPrelude seems to not work (Prelude is still not loaded) when the
exposed Prelude is from base, but it works fine when the Prelude is from a
different package.  Counterintuitive, but it does everything I need it to.
Thanks for the suggestion!
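For anyone following along, here is a minimal sketch of the mechanism being discussed (the Celsius type is just an illustration, not from my actual project): under RebindableSyntax, integer literals desugar to whatever `fromInteger` is in scope, rather than Prelude's.

```haskell
{-# LANGUAGE RebindableSyntax #-}
module Main where

import Prelude hiding (fromInteger)
import qualified Prelude

newtype Celsius = Celsius Double deriving Show

-- Under RebindableSyntax, the literal 20 below means
-- `fromInteger 20` using *this* fromInteger, not Prelude's.
fromInteger :: Integer -> Celsius
fromInteger n = Celsius (Prelude.fromInteger n)

main :: IO ()
main = print (20 :: Celsius)  -- prints: Celsius 20.0
```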
On Jul 9, 2013 4:20 PM, "Aleksey Khudyakov" 
wrote:

> On 10.07.2013 01:13, Chris Smith wrote:
>
>> Ugh... I take back the never mind.  So if I replace Prelude with an
>> alternate definition, but don't use RebindableSyntax, and then hide
>> the base package, GHC still uses fromInteger and such from base even
>> though it should be inaccessible.  But if I do use RebindableSyntax,
>> then the end-user has to add 'import Prelude' to the top of their
>> code.  Am I missing something?
>>
>>  If base is hidden, GHCi refuses to start because it can't import Prelude
> (with -XNoImplicitPrelude it starts just fine).
>
> According to the documentation, GHC will use whatever fromInteger is in
> scope, but I've never used the extension in that way.
>
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Possible extension to Haskell overloading behavior

2013-07-09 Thread Chris Smith
Ugh... I take back the never mind.  So if I replace Prelude with an
alternate definition, but don't use RebindableSyntax, and then hide
the base package, GHC still uses fromInteger and such from base even
though it should be inaccessible.  But if I do use RebindableSyntax,
then the end-user has to add 'import Prelude' to the top of their
code.  Am I missing something?

On Tue, Jul 9, 2013 at 1:51 PM, Chris Smith  wrote:
> Oh, never mind.  In this case, I guess I don't need an extension at all!
>
> On Tue, Jul 9, 2013 at 1:47 PM, Chris Smith  wrote:
>> Oh, yes.  That looks great!  Also seems to work with OverloadedStrings
>> in the natural way in GHC 7.6, although that isn't documented.
>>
>> Now if only it didn't force NoImplicitPrelude, since I really want to
>> -hide-package base and -package my-other-prelude.  Even adding
>> -XImplicitPrelude doesn't help.
>>
>> On Tue, Jul 9, 2013 at 11:46 AM, Aleksey Khudyakov
>>  wrote:
>>> On 08.07.2013 23:54, Chris Smith wrote:
>>>>
>>>> So I've been thinking about something, and I'm curious whether anyone
>>>> (in particular, people involved with GHC) think this is a worthwhile
>>>> idea.
>>>>
>>>> I'd like to implement an extension to GHC to offer a different
>>>> behavior for literals with polymorphic types.  The current behavior is
>>>> something like:
>>>>
>>> RebindableSyntax[1] could probably work for you. From its description, it
>>> allows changing the meaning of literals.
>>>
>>> [1]
>>> http://www.haskell.org/ghc/docs/7.6.3/html/users_guide/syntax-extns.html#rebindable-syntax
>>>
>>> ___
>>> Haskell-Cafe mailing list
>>> Haskell-Cafe@haskell.org
>>> http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Possible extension to Haskell overloading behavior

2013-07-09 Thread Chris Smith
Oh, never mind.  In this case, I guess I don't need an extension at all!

On Tue, Jul 9, 2013 at 1:47 PM, Chris Smith  wrote:
> Oh, yes.  That looks great!  Also seems to work with OverloadedStrings
> in the natural way in GHC 7.6, although that isn't documented.
>
> Now if only it didn't force NoImplicitPrelude, since I really want to
> -hide-package base and -package my-other-prelude.  Even adding
> -XImplicitPrelude doesn't help.
>
> On Tue, Jul 9, 2013 at 11:46 AM, Aleksey Khudyakov
>  wrote:
>> On 08.07.2013 23:54, Chris Smith wrote:
>>>
>>> So I've been thinking about something, and I'm curious whether anyone
>>> (in particular, people involved with GHC) think this is a worthwhile
>>> idea.
>>>
>>> I'd like to implement an extension to GHC to offer a different
>>> behavior for literals with polymorphic types.  The current behavior is
>>> something like:
>>>
>> RebindableSyntax[1] could probably work for you. From its description, it
>> allows changing the meaning of literals.
>>
>> [1]
>> http://www.haskell.org/ghc/docs/7.6.3/html/users_guide/syntax-extns.html#rebindable-syntax
>>
>> ___
>> Haskell-Cafe mailing list
>> Haskell-Cafe@haskell.org
>> http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Possible extension to Haskell overloading behavior

2013-07-09 Thread Chris Smith
Oh, yes.  That looks great!  Also seems to work with OverloadedStrings
in the natural way in GHC 7.6, although that isn't documented.

Now if only it didn't force NoImplicitPrelude, since I really want to
-hide-package base and -package my-other-prelude.  Even adding
-XImplicitPrelude doesn't help.

On Tue, Jul 9, 2013 at 11:46 AM, Aleksey Khudyakov
 wrote:
> On 08.07.2013 23:54, Chris Smith wrote:
>>
>> So I've been thinking about something, and I'm curious whether anyone
>> (in particular, people involved with GHC) think this is a worthwhile
>> idea.
>>
>> I'd like to implement an extension to GHC to offer a different
>> behavior for literals with polymorphic types.  The current behavior is
>> something like:
>>
> RebindableSyntax[1] could probably work for you. From its description, it
> allows changing the meaning of literals.
>
> [1]
> http://www.haskell.org/ghc/docs/7.6.3/html/users_guide/syntax-extns.html#rebindable-syntax
>
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Possible extension to Haskell overloading behavior

2013-07-08 Thread Chris Smith
Oops, when I wrote this, I'd assumed it was possible to export
defaults from a module, like an alternate Prelude.  But it looks like
they only affect the current module.  So this whole thing depends on
also being able to either define defaults in an imported module, or in
options to GHC.

On Mon, Jul 8, 2013 at 12:54 PM, Chris Smith  wrote:
> So I've been thinking about something, and I'm curious whether anyone
> (in particular, people involved with GHC) think this is a worthwhile
> idea.
>
> I'd like to implement an extension to GHC to offer a different
> behavior for literals with polymorphic types.  The current behavior is
> something like:
>
> 1. Give the literal a polymorphic type, like (Integral a => a)
> 2. Type check the whole program, possibly giving the term a more
> constrained type.
> 3. If the type is still ambiguous, apply defaulting rules.
>
> I'd like to add the option to do this instead.
>
> 1. Take the polymorphic type, and immediately apply defaulting rules
> to get a monomorphic type.
> 2. Type check the program with the monomorphic type.
>
> Mostly, this would reduce the set of valid programs, since the type is
> chosen before considering whether it meets all the relevant
> constraints.  So what's the purpose?  To simplify type errors for
> programmers who don't understand type classes.  What I have in mind is
> domain-specific dialects of Haskell that replace the Prelude and are
> aimed at less technical audiences - in my case, children around 10 to
> 13 years old; but I think the ideas apply elsewhere, too.  Type
> classes are (debatably) the one feature of Haskell that tends to be
> tricky for non-technical audiences, and yet pops up in very simple
> programs (and more importantly, their error messages) even when the
> programmer wasn't aware of its existence, because of its role in
> overloaded literals.
>
> In some cases, I think it's a good trade to remove overloaded
> literals, in exchange for simpler error messages.  This leaves new
> programmers learning a very small, simple language, and not staring so
> much at cryptic error messages.  At the same time, it's not really
> changing the language, except for the need to explicitly use type
> classes (via conversion functions like fromInteger) rather than get
> them thrown in implicitly.  With GHC's extended defaulting rules that
> apply for OverloadedStrings, this could also be used to treat all
> string literals as Text, which might make some people happy, too.
>
> Of course, the disadvantage is that for numeric types, you would lose
> the convenience of overloaded operators, since this is only a sensible
> thing to do if you're replacing the Prelude with one that doesn't use
> type classes.  But in at least my intended use, I prefer to have a
> single Number type anyway (and a single Text type that's not sometimes
> called [Char]).  In the past, explaining these things has eaten up far
> too much time that I'd rather have spent on more general skills and
> creative activities.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Possible extension to Haskell overloading behavior

2013-07-08 Thread Chris Smith
So I've been thinking about something, and I'm curious whether anyone
(in particular, people involved with GHC) think this is a worthwhile
idea.

I'd like to implement an extension to GHC to offer a different
behavior for literals with polymorphic types.  The current behavior is
something like:

1. Give the literal a polymorphic type, like (Integral a => a)
2. Type check the whole program, possibly giving the term a more
constrained type.
3. If the type is still ambiguous, apply defaulting rules.

I'd like to add the option to do this instead.

1. Take the polymorphic type, and immediately apply defaulting rules
to get a monomorphic type.
2. Type check the program with the monomorphic type.
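Concretely, the difference would show up in code like this (an illustration of the two rules, not a proposed implementation):

```haskell
module Main where

-- Today, a literal stays polymorphic until constrained by context,
-- so the same token can become a Double here:
x :: Double
x = 1          -- fine today: 1 :: Num a => a, then a ~ Double

-- ...and an Int here:
y :: Int
y = 1

-- Under the proposed rule, the literal would be defaulted to one
-- monomorphic type (say, Integer) *before* type checking, so the
-- definition of x above would become a type error unless the
-- replacement Prelude used a single Number type throughout.
main :: IO ()
main = print (x, y)  -- prints: (1.0,1)
```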

Mostly, this would reduce the set of valid programs, since the type is
chosen before considering whether it meets all the relevant
constraints.  So what's the purpose?  To simplify type errors for
programmers who don't understand type classes.  What I have in mind is
domain-specific dialects of Haskell that replace the Prelude and are
aimed at less technical audiences - in my case, children around 10 to
13 years old; but I think the ideas apply elsewhere, too.  Type
classes are (debatably) the one feature of Haskell that tends to be
tricky for non-technical audiences, and yet pops up in very simple
programs (and more importantly, their error messages) even when the
programmer wasn't aware of its existence, because of its role in
overloaded literals.

In some cases, I think it's a good trade to remove overloaded
literals, in exchange for simpler error messages.  This leaves new
programmers learning a very small, simple language, and not staring so
much at cryptic error messages.  At the same time, it's not really
changing the language, except for the need to explicitly use type
classes (via conversion functions like fromInteger) rather than get
them thrown in implicitly.  With GHC's extended defaulting rules that
apply for OverloadedStrings, this could also be used to treat all
string literals as Text, which might make some people happy, too.

Of course, the disadvantage is that for numeric types, you would lose
the convenience of overloaded operators, since this is only a sensible
thing to do if you're replacing the Prelude with one that doesn't use
type classes.  But in at least my intended use, I prefer to have a
single Number type anyway (and a single Text type that's not sometimes
called [Char]).  In the past, explaining these things has eaten up far
too much time that I'd rather have spent on more general skills and
creative activities.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Markdown extension for Haddock as a GSoC project

2013-04-28 Thread Chris Smith
On Apr 28, 2013 6:42 PM, "Alexander Solla"  wrote:
> I think that much has to do with the historical division in computer
science.  We have mathematics on the right hand, and electrical engineering
on the wrong one.

I've been called many things, but electrical engineer is a new one!

My point was not anything at all to do with programming.  It was about
writing comments, which is fundamentally a communication activity.  That
makes a difference.  It's important to keep in mind that the worst possible
consequence of getting corner cases wrong here is likely that some
documentation will be confusing because the numbering is messed up in an
ordered list.

Pointing out that different processors treat markdown differently with
respect to bold or italics and such is ultimately missing the point.  For
example, I am aware that Reddit treats *foo* as italics while, say,
Google+ puts it in bold... but I really don't care.  What is really of any
importance is that both of them take reasonable conventions from plain text
and render them reasonably.  As far as I'm concerned, you can flip a coin
as to whether it ends up in bold or italics.

That doesn't mean the choices should not be documented.  Sure they should.
But it seems ridiculous to sidetrack the proposal to do something nice by
concerns about the minutiae of the syntax.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Markdown extension for Haddock as a GSoC project

2013-04-28 Thread Chris Smith
I think it's worth backing up here, and remembering the original point
of the proposal, by thinking about what is and isn't a goal.  I think
I'd classify things like this:

Goals:
- Use a lightweight, common, and familiar core syntax for simple formatting.
- Still allow haddock-specific stuff like links to other symbols.

Non-Goals:
- Compliance/compatibility with any specification or other system.
- Have any kind of formal semantics.

The essence of this proposal is about making Haddock come closer to
matching all the other places where people type formatted text on the
Internet.  As Johan said, markdown has won.  But markdown won because
it ended up being a collection of general conventions with
compatibility for the 95% of commonly used bits... NOT a formal
specification.  If there are bits of markdown that are just really,
really awkward to use in Haddock, modify them or leave them out.  I
think the whole discussion is getting off on the wrong start by
looking for the right specification against which this should be
judged, when it's really just about building the most natural possible
ad-hoc adaptation of markdown to documentation comments.  Just doing
the easy stuff, like switching from /foo/ to *foo* for emphasis,
really is most of the goal.  Anything beyond that is even better.

Compatibility or compliance to a specification are non-issues: no one
is going to be frequently copying documentation comments to and from
other markdown sources.  Haddock will unavoidably have its own
extensions for references to other definitions anyway, as will the
other system, so it won't be compatible.  Let's just accept that.

Formal semantics is a non-issue: the behavior will still be defined by
the implementation, in that people will write their documentation, and
if it looks wrong, they will fix it.  We don't want to reason formally
about the formatting of our comments, or prove that it's correct.
Avoiding unpleasant surprises is good; but avoiding *all* possible
ambiguous corner cases in parsing, even when they are less likely than
typos, is not particularly important.  If some ambiguity becomes a big
problem, it will get fixed later as a bug.

I think the most important thing here is to not get caught up in
debates about advanced syntax or parsing ambiguities, or let that stop
us from being able to emphasize words the same way we do in the whole
rest of the internet.

-- 
Chris Smith

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Markdown extension for Haddock as a GSoC project

2013-04-27 Thread Chris Smith
Oops, forgot to reply all.
-- Forwarded message --
From: "Chris Smith" 
Date: Apr 27, 2013 12:04 PM
Subject: Re: [Haskell-cafe] Markdown extension for Haddock as a GSoC project
To: "Bryan O'Sullivan" 
Cc:

I don't agree with this at all.  Far more important than which convention
gets chosen is that Haskell code can be read and written without learning
many dialects of Haddock syntax.  I see an API for pluggable haddock syntax
as more of a liability than a benefit.  Better to just stick to what we
have than fragment into more islands.

I do think that changing Haddock syntax to include common core pieces of
Markdown could be a positive change... but not if it spawns a battle of
fragmented documentation syntax that lasts a decade.
On Apr 27, 2013 11:08 AM, "Bryan O'Sullivan"  wrote:

> On Sat, Apr 27, 2013 at 2:23 AM, Alistair Bayley wrote:
>
>> How's about Creole?
>> http://wikicreole.org/
>>
>> Found it via this:
>>
>> http://www.wilfred.me.uk/blog/2012/07/30/why-markdown-is-not-my-favourite-language/
>>
>> If you go with Markdown, I vote for one of the Pandoc implementations,
>> probably Pandoc (strict):
>> http://johnmacfarlane.net/babelmark2/
>>
>> (at least then we're not creating yet another standard...)
>>
>
> Probably the best way to deal with this is by sidestepping it: make the
> support for alternative syntaxes as modular as possible, and choose two to
> start out with in order to get a reasonable shot at constructing a suitable
> API.
>
> I think it would be a shame to bikeshed on which specific syntaxes to
> support, when a lot of productive energy could more usefully go into
> actually getting the work done. Better to say "prefer a different markup
> language? code to this API, then submit a patch!"
>
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
>
>
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] unsafeInterleaveST (and IO) is really unsafe [was: meaning of "referential transparency"]

2013-04-12 Thread Chris Smith
On Fri, Apr 12, 2013 at 1:44 AM,   wrote:
> As to alternatives -- this is may be the issue of
> familiarity or the availability of a nice library of combinators.

It is certainly not just a matter of familiarity, nor availability.
Rather, it's a matter of the number of names that are required in a
working set.  Any Haskell programmer, regardless of whether they use
lazy I/O, will already know the meanings of (.), length, and filter.
On the other hand, (>$<), count_i, and filterL are new names that must
be learned from yet another library -- and much harder than learned,
also kept in a mental working set of fluency.

This ends up being a rather strong argument for lazy I/O.  Not that
the code is shorter, but that it (surprisingly) unifies ideas that
would otherwise have required separate vocabulary.
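To make the comparison concrete, here is the kind of thing I mean: with lazy I/O, processing a file reuses exactly the list vocabulary every Haskell programmer already has (the file name is hypothetical):

```haskell
import Data.Char (isSpace)

-- Counting the non-blank lines of a file with lazy I/O: the only
-- I/O-specific name here is readFile; length, filter, lines, and
-- (.) are the ordinary list functions everyone already knows.
countNonBlank :: FilePath -> IO Int
countNonBlank path =
  fmap (length . filter (not . all isSpace) . lines) (readFile path)

main :: IO ()
main = countNonBlank "input.txt" >>= print
```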

I'm not saying it's a sufficient argument, just that it's a much
stronger one than familiarity, and that it's untrue that some better
library might achieve the same thing without the negative
consequences.  (If you're curious, I do believe that it often is a
sufficient argument in certain environments; I just don't think that's
the kind of question that gets resolved in mailing list threads.)

-- 
Chris Smith

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GSOC application level

2013-03-06 Thread Chris Smith
"Mateusz Kowalczyk"  wrote:
>
> I know that this year's projects aren't up
> yet

Just to clarify, there isn't an official list of projects for you to choose
from.  The project that you propose is entirely up to you.  There is a list
of recommendations at
http://hackage.haskell.org/trac/summer-of-code/report/1 and another list of
ideas at http://reddit.com/r/haskell_proposals -- but keep in mind that you
ultimately make your own choice about what you propose, and it doesn't have
to be selected from those lists.  You can start writing your perusal today
if you like.

Having an unusually good idea is a great way to get selected even if you
don't have an established body of work to point to.  Just keep in mind that
proposals are evaluated not just on the benefit if they are completed, but
also on their likelihood of success... a good idea is both helpful and
realistic.  They are also evaluated on their benefit to the actual Haskell
community... so if that's not something you have a good feel for, I'd
suggest getting involved.  Follow reddit.com/r/haskell, read this mailing
list, read Haskell blogs from planet.haskell.org, and get familiar with
what Haskellers are concerned about and interested in.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] lambda case (was Re: A big hurray for lambda-case (and all the other good stuff))

2012-12-30 Thread Chris Smith
On Sun, Dec 30, 2012 at 8:51 AM, David Thomas wrote:

> Jon's suggestion sounds great.
>
> The bike shed should be green.
>

There were plenty of proposals that would work fine.  `case of` was great.
 `\ of` was great.  It's less obvious to me that stand-alone `of` is never
ambiguous... but if that's true, it's reasonable.  Sadly, the option that
was worse than doing nothing at all is what was implemented.

The "bikeshedding" nonsense is frustrating.  Bikeshedding is about wasting
time debating the minutia of a significant improvement, when everyone
agrees the improvement is a good idea.  Here, what happened was that
someone proposed a minor syntax tweak (from `\x -> case x of` to `case
of`), other reasonable minor syntax tweaks were proposed instead to
accomplish the same goal, and then in the end, out of the blue, it was
decided to turn `case` into a layout-inducing keyword (or even worse, only
sometimes but not always layout-inducing).

There is no bike shed here.  There are just colors (minor syntax tweaks).
 And I don't get the use of "bikeshedding" as basically just a rude comment
to be made at people who don't like the same syntax others do.

-- 
Chris
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Categories (cont.)

2012-12-21 Thread Chris Smith
It would definitely be nice to be able to work with a partial Category
class, where for example the objects could be constrained to belong to a
class.  One could then restrict a Category to a type level representation
of the natural numbers or any other desired set.  Kind polymorphism should
make this easy to define, but I still don't have a good feel for whether it
is worth the complexity.
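A sketch of what I mean, using constraint kinds and kind polymorphism (the names here are mine, not a settled design):

```haskell
{-# LANGUAGE ConstraintKinds, TypeFamilies, PolyKinds, KindSignatures #-}
module ConstrainedCategory where

import Prelude hiding (id, (.))
import Data.Kind (Constraint)

-- A Category whose objects may be restricted by a constraint,
-- instead of always being "all Haskell types".
class Category (cat :: k -> k -> *) where
  type Object cat (a :: k) :: Constraint
  id  :: Object cat a => cat a a
  (.) :: (Object cat a, Object cat b, Object cat c)
      => cat b c -> cat a b -> cat a c

-- Ordinary functions recover the usual instance, with no
-- restriction at all on the objects.
instance Category (->) where
  type Object (->) a = ()
  id x = x
  (f . g) x = f (g x)
```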
On Dec 21, 2012 6:37 AM, "Tillmann Rendel" 
wrote:

> Hi,
>
> Christopher Howard wrote:
>
>> instance Category ...
>>
>
> The Category class is rather restricted:
>
> Restriction 1:
> You cannot choose what the objects of the category are. Instead, the
> objects are always "all Haskell types". You cannot choose anything at all
> about the objects.
>
> Restriction 2:
> You cannot freely choose what the morphisms of the category are. Instead,
> the morphisms are always Haskell values. (To some degree, you can choose
> *which* values you want to use).
>
>
> These restrictions disallow many categories. For example, the category
> where the objects are natural numbers and there is a morphism from m to n
> if m is greater than or equal to n cannot be expressed directly: Natural
> numbers are not Haskell types; and "is bigger than or equal to" is not a
> Haskell value.
>
>   Tillmann
>
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
>
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] containers license issue

2012-12-17 Thread Chris Smith
"Ketil Malde"  wrote:
> The point of the point is that neither of these are translations of
> literary works, there is no precedence for considering them as such, and
> that reading somebody's work (whether literary or source code) before
> writing one's own does not imply that the 'somebody' will hold any
> rights to the subsequent work.

So IANAL, but I do have an amateur interest in copyright law.  The debate
over the word "translation" is completely irrelevant.  The important point
is whether it is a "derived work".  That phrase certainly includes more
than mere translation.  For example, it includes writing fiction that's set
in the same fantasy universe or uses the same characters as another author's
works.  It also includes making videos with someone else's music playing in
the background. If you create a derived work, then the author of the
original definitely has rights to it, regardless of whether it is a mere
translation.  That's also why the word "derived" in a comment was
particularly alarming to the legal staff and probably caused them to overreact
in this case.

The defense in the case of software is to say that the part that was copied
was not a work of authorship in the sense that, say, a fiction character
is.  This is generally not a hard case to win, since courts see computer
software as dominated by its practical function.  But if you copied
something that was clearly a matter of expression and not related to the
function of the software, you could very well be creating a derived work
over which the original author could assert control.

That said, I agree that in this particular case it's very unlikely that the
original author could have won an infringement case.  I just balked a
little at the statements about translation, which was really just an
example.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] containers license issue

2012-12-12 Thread Chris Smith
"Clint Adams"  wrote:
>
> On Wed, Dec 12, 2012 at 10:45:48PM -0500, Clark Gaebel wrote:
> > This is a silly issue.
>
> It certainly seems to be.  If it were serious, I'd like to think
> that people would be attempting to get actual legal advice
> instead of spouting anti-copyleft FUD.

Well, actual legal advice comes from actual lawyers, who often want actual
money.

I'm interested in what you saw as "anti-copyleft FUD" though.  That the
code might be subject to the GPL and that caused problems?  That's the only
thing that did come from a lawyer.  And it's really the only negative thing
I saw about the GPL in this thread.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Call for discussion: OverloadedLists extension

2012-09-23 Thread Chris Smith
Michael Snoyman  wrote:
> That said, it would be great to come up with ways to mitigate the
> downsides of unbounded polymorphism that you bring up. One idea I've
> seen mentioned before is to modify these extensions so that they target
> a specific instance of IsString/IsList, e.g.:
>
> {-# STRING_LITERALS_AS Text #-}
>
> "foo" ==> (fromString "foo" :: Text)

That makes sense for OverloadedStrings, but probably not for
OverloadedLists or overloaded numbers... String literals have the
benefit that there's one type that you probably always really meant.
The cases where you really wanted [Char] or ByteString are rare.  On
the other hand, there really is no sensible "I always want this"
answer for lists or numbers.  It seems like a kludge to do it
per-module if each module is going to give different answers most of
the time.
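For reference, this is the mechanism that the hypothetical pragma would be narrowing. Today, OverloadedStrings elaborates every string literal through fromString, and the IsString instance picks the type (the Name type is just an illustration):

```haskell
{-# LANGUAGE OverloadedStrings #-}
module Main where

import Data.String (IsString (..))

newtype Name = Name String deriving Show

-- With this instance in scope, a string literal at type Name
-- elaborates to `fromString "..."`, i.e. `Name "..."`.
instance IsString Name where
  fromString = Name

greet :: Name -> String
greet (Name n) = "hello, " ++ n

main :: IO ()
main = putStrLn (greet "world")  -- prints: hello, world
```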

-- 
Chris

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Over general types are too easy to make.

2012-09-02 Thread Chris Smith
On Sun, Sep 2, 2012 at 9:40 AM,   wrote:
> The thing is, that one ALWAYS wants to create a union of types, and not
> merely an ad hoc list of data declarations.  So why does it take more code
> to do "the right thing(tm)" than to do "the wrong thing(r)"?

You've said this a few times, that you run into this constantly, or
even that everyone runs into this.  But I don't think that's the case.
 It's something that happens sometimes, yes, but if you're running
into this issue for every data type that you declare, that is
certainly NOT just normal in Haskell programming.  So in that sense,
many of the answers you've gotten - to use a GADT, in particular -
might be great advice in the small subset of cases where average
Haskell programmers want more complex constraints on types; but it's
certainly not a good idea to do to every data type in your
application.

I don't have the answer for you about why this always happens to you,
but it's clear that there's something there - perhaps a stylistic
issue, or a domain-specific pattern, or something... - that's causing
you to face this a lot more frequently than others do.  If I had to
take a guess, I'd say that you're breaking things down into fairly
complex monolithic parts, where a lot of Haskell programmers will have
a tendency to work with simpler types and break things down into
smaller pieces.  But... who knows... I haven't seen the many cases
where this has happened to you.
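For readers who haven't seen the GADT suggestion in action, here is a small sketch of the kind of constraint it buys you; the `Expr` type is invented for illustration, not taken from the original poster's code:

```haskell
{-# LANGUAGE GADTs #-}

-- A GADT lets each constructor refine the result type, so ill-typed
-- combinations (e.g. adding a boolean) are rejected statically;
-- a plain union of data declarations cannot express this.
data Expr a where
  IntLit  :: Int  -> Expr Int
  BoolLit :: Bool -> Expr Bool
  Add     :: Expr Int -> Expr Int -> Expr Int
  If      :: Expr Bool -> Expr a -> Expr a -> Expr a

-- The payoff: a total evaluator whose result type follows the index.
eval :: Expr a -> a
eval (IntLit n)  = n
eval (BoolLit b) = b
eval (Add x y)   = eval x + eval y
eval (If c t e)  = if eval c then eval t else eval e

main :: IO ()
main = print (eval (If (BoolLit True)
                       (Add (IntLit 1) (IntLit 2))
                       (IntLit 0)))
```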

-- 
Chris



Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-16 Thread Chris Smith
Twan van Laarhoven  wrote:
> Would adding a single convenience function be low or high risk? You say it
> is low risk, but it still risks breaking a build if a user has defined a
> function with the same name.

Yes, it's generally low-risk, but there is *some* risk.  Of course, it
could be high risk if you duplicate a Prelude function or a name that
you know is in use elsewhere in a related or core library... these
decisions would involve knowing something about the library space,
which package maintainers often do.

> I think the only meaningful distinction you can make are:

Except that the whole point is that this is *not* the only distinction
you can make.  It might be the only distinction with an exact
definition that can be checked by automated tools, but that doesn't
change the fact that when I make an incompatible change to a library
I'm maintaining, I generally have a pretty good idea of which kinds of
users are going to be fixing their code as a result.  The very essence
of my suggestion was that we accept the fact that we are working in
probabilities here, and empower package maintainers to share their
informed evaluation.  Right now, there's no way to provide that
information: the PVP is caught up in exactly this kind of legalism
that only cares whether a break is possible or impossible, without
regard to how probable it is.  The complaint that this new mechanism
doesn't have exactly such a black and white set of criteria associated
with it is missing the point.

-- 
Chris



Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-16 Thread Chris Smith
I am tentatively in agreement that upper bounds are causing more
problems than they are solving.  However, I want to suggest that
perhaps the more fundamental issue is that Cabal asks the wrong person
to answer questions about API stability.  As a package author, when I
release a new version, I know perfectly well what incompatible changes
I have made to it... and those might include, for example:

1. New modules, exports or instances... low risk
2. Changes to less frequently used, advanced, or "internal" APIs...
moderate risk
3. Completely revamped commonly used interfaces... high risk

Currently *all* of these categories have the potential to break
builds, so require the big hammer of changing the first-dot version
number.  I feel like I should be able to convey this level of risk,
though... and it should be able to be used by Cabal.  So, here's a
proposal just to toss out there; no idea if it would be worth the
complexity or not:

A. Cabal files should get a new "Compatibility" field, indicating the
level of compatibility from the previous release: low, medium, high,
or something like that, with definitions for what each one means.

B. Version constraints should get a new syntax:

bytestring ~ 0.10.* (allow later versions that indicate low or
moderate risk)
bytestring ~~ 0.10.* (allow later versions with low risk; we use
the dark corners of this one)
bytestring == 0.10.* (depend 100% on 0.10, and allow nothing else)

Of course, this adds a good bit of complexity to the constraint
solver... but not really.  It's more like a pre-processing pass to
replace fuzzy constraints with precise ones.
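To show roughly what that pre-processing pass would decide, here is a toy model; the `Risk` levels and `admits` function are invented for illustration, and the real rules would come from each release's declared Compatibility field:

```haskell
-- Invented model of the proposed fuzzy-constraint semantics.
data Risk = Low | Moderate | High deriving (Eq, Ord, Show)

data Fuzzy
  = Tilde       -- "~"  : accept later versions of low or moderate risk
  | TildeTilde  -- "~~" : accept later versions of low risk only
  | Exact       -- "==" : accept nothing beyond the stated range
  deriving (Show)

-- Would a later release with the given declared risk satisfy
-- this style of constraint?
admits :: Fuzzy -> Risk -> Bool
admits Tilde      r = r <= Moderate
admits TildeTilde r = r <= Low
admits Exact      _ = False

main :: IO ()
main = mapM_ print
  [ (c, r, admits c r)
  | c <- [Tilde, TildeTilde, Exact]
  , r <- [Low, Moderate, High] ]
```

The solver itself stays unchanged: this pass just expands each fuzzy constraint into a precise version set before solving begins.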

-- 
Chris



Re: [Haskell-cafe] Call to arms: lambda-case is stuck and needs your help

2012-07-06 Thread Chris Smith
Whoops, my earlier answer forgot to copy mailing lists... I would love to
see \of, but I really don't think this is important enough to make case
sometimes introduce layout and other times not.  If it's going to obfuscate
the lexical syntax like that, I'd rather just stick with \x->case x of.
On Jul 6, 2012 3:15 PM, "Strake"  wrote:

> On 05/07/2012, Mikhail Vorozhtsov  wrote:
> > Hi.
> >
> > After 21 months of occasional arguing the lambda-case proposal(s) is in
> > danger of being buried under its own trac ticket comments. We need fresh
> > blood to finally reach an agreement on the syntax. Read the wiki
> > page[1], take a look at the ticket[2], vote and comment on the proposals!
> >
>
> +1 for "\ of" multi-clause lambdas
>
> It looks like binding "of" to me, which it ain't, but it is nicely brief...
>
>


[Haskell-cafe] Current uses of Haskell in industry?

2012-06-13 Thread Chris Smith
It turns out I'm filling in for a cancelled speaker at a local open
source user group, and doing a two-part talk, first on Haskell and
then Snap.  For the Haskell part, I'd like a list of current places
the language is used in industry.  I recall a few from Reddit stories
and messages here and other sources, but I wonder if anyone is keeping
a list.

-- 
Chris



[Haskell-cafe] Fwd: Problem with forall type in type declaration

2012-05-04 Thread Chris Smith
Oops, forgot to reply-all again...

-- Forwarded message --
From: Chris Smith 
Date: Fri, May 4, 2012 at 8:46 AM
Subject: Re: [Haskell-cafe] Problem with forall type in type declaration
To: Magicloud Magiclouds 


On Fri, May 4, 2012 at 2:34 AM, Magicloud Magiclouds
 wrote:
> Sorry, it was just pseudo-code. This might be clearer:
>
> run :: (Monad m) => m IO a -> IO a

Unfortunately, that's not more clear.  For the constraint (Monad m) to
hold, m must have the kind (* -> *), so then (m IO a) is meaningless.
I assume you meant one of the following:

   run :: MonadTrans m => m IO a -> IO a

or

   run :: MonadIO m => m a -> IO a

(Note that MonadIO is the class from the mtl package; there is no space there).

Can you clarify which was meant?  Or perhaps you meant something else entirely?

--
Chris Smith



Re: [Haskell-cafe] ANNOUNCE: pipes-core 0.1.0

2012-04-17 Thread Chris Smith
Paolo,

This new pipes-core release looks very nice, and I'm happy to see
exception and finalizer safety while still retaining the general
structure of the original pipes package.  One thing that Gabriel and
Michael have been talking about, though, that seems to be missing
here, is a way for a pipe to indicate that it's finished with its
upstream portion, so that upstream finalizers can be immediately run
without waiting for the downstream parts of the pipe to complete.

Do you have an answer for this?  I've been puzzling it out this
morning, but it's unclear to me how something like this interacts with
type safety and exception handling.

-- 
Chris



Re: [Haskell-cafe] open source project for student

2012-04-11 Thread Chris Smith
Hmm, tough to answer without more to go on.  I think if I were in your
shoes I'd ask myself where I'm most happy outside of programming.  A lot of
good entry level open source work involves combining programming with other
skills.

Are you an artist?  Have a talent for strong design and striking expression?

Are you an organizer or a communicator?  The sort of person who draws
diagrams and talks to yourself practicing better ways to explain cool ideas
in simple terms?

Are you a scrappy tinkerer?  Someone who knows how to get your hands dirty
in a productive way before you're an expert?  A wiz with unit testing and
profiling tools?

I do have an education-related project I'm working on where being a smart
but inexperienced programmer might be an advantage.  But it's a question of
whether it's a good fit for what you're looking for.  Email me if you may
be interested in that.
On Apr 11, 2012 3:53 PM, "Dan Cristian Octavian" 
wrote:

> Hello,
>
> I am a second year computer science student who is very interested in
>  working on a haskell open source project. I have no particular focus on a
> certain type of application. I am open to ideas and to exploring new
> fields. What kind of project should I look for considering that I am a
> beginner? (Any particular project proposals would be greatly appreciated).
>
> Is the entry bar too high for most projects out there for somebody lacking
> experience such as me so that I should try getting some experience on my
> own first?
>
> Would it be a better idea to try to hack on my own project rather than
> helping on an existing one?
>
> Thank you very much for your help.
>
>
>
>
>
>


Re: [Haskell-cafe] adding the elements of two lists

2012-03-26 Thread Chris Smith
On Mon, Mar 26, 2012 at 10:18 AM, Jerzy Karczmarczuk
 wrote:
> So, *the addition* is not invertible, why did you introduce rings ...

My intent was to point out that the Num instance that someone
suggested for Num a => Num [a] was a bad idea.  I talked about rings
because they are the uncontroversial part of the laws associated with
Num: I think everyone would agree that the minimum you should expect
of an instance of Num is that its elements form a ring.

In any case, the original question has been thoroughly answered... the
right answer is that zipWith is far simpler than the code in the
question, and that defining a Num instance is possible, but a bad idea
because there's not a canonical way to define a ring on lists.  The
rest of this seems to have devolved into quite a lot of bickering and
one-upmanship, so I'll back out now.

-- 
Chris Smith



Re: [Haskell-cafe] adding the elements of two lists

2012-03-26 Thread Chris Smith
Jerzy Karczmarczuk  wrote:
> On 26/03/2012 at 02:41, Chris Smith wrote:
>> Of course there are rings for which it's possible to represent the
>> elements as lists.  Nevertheless, there is definitely not one that
>> defines (+) = zipWith (+), as did the one I was responding to.
>
> What?
>
> The additive structure does not define a ring.
> The multiplication can be a Legion, all different.

I'm not sure I understand what you're saying there.  If you were
asking about why there is no ring on [a] that defines (+) = zipWith
(+), then here's why.  By that definition, you have [1,2,3] + [4,5] =
[5,7].  But also [1,2,42] + [4,5] = [5,7].  Addition by [4,5] is not
one-to-one, so [4,5] cannot be invertible.
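The collision is easy to observe directly:

```haskell
-- zipWith (+) truncates to the shorter list, so two distinct lists
-- collide on the same sum: addition by [4,5] cannot be inverted.
add :: [Int] -> [Int] -> [Int]
add = zipWith (+)

main :: IO ()
main = do
  print (add [1,2,3]  [4,5])   -- [5,7]
  print (add [1,2,42] [4,5])   -- [5,7] again: the 3 and the 42 are lost
```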

-- 
Chris Smith



Re: [Haskell-cafe] adding the elements of two lists

2012-03-25 Thread Chris Smith
Jerzy Karczmarczuk  wrote:
> On 26/03/2012 at 01:51, Chris Smith wrote:
>
>>     instance (Num a) => Num [a] where
>>     xs + ys = zipWith (+) xs ys
>>
>> You can do this in the sense that it's legal Haskell... but it is a bad idea 
>> [...]

> It MIGHT be a ring or not. The "real problem" is that one should not confuse
> structural and algebraic (in the "classical" sense) properties of your
> objects.

Of course there are rings for which it's possible to represent the
elements as lists.  Nevertheless, there is definitely not one that
defines (+) = zipWith (+), as did the one I was responding to.  By the
time you get a ring structure back by some *other* set of rules,
particularly for multiplication, the result will so clearly not be
anything like a general Num instance for lists that it's silly to even
be having this discussion.

-- 
Chris Smith



Re: [Haskell-cafe] adding the elements of two lists

2012-03-25 Thread Chris Smith
"Jonathan Grochowski"  wrote:
> As Michael suggests using zipWith (+) is the simplest solution.
>
> If you really want to be able to write [1,2,3] + [4,5,6], you can define
> the instance as
>
> instance (Num a) => Num [a] where
>   xs + ys = zipWith (+) xs ys

You can do this in the sense that it's legal Haskell... but it is a bad
idea to make lists an instance of Num, because there are situations where
the result doesn't act as you would like (if you've had abstract algebra,
the problem is that it isn't a ring).

More concretely, it's not hard to see that the additive identity is
[0,0,0...], the infinite list of zeros.  But if you have a finite list x,
then x - x is NOT equal to that additive identity!  Instead, you'd only get
a finite list of zeros, and if you try to do math with that later, you're
going to accidentally truncate some answers now and then and wonder what
went wrong.

In general, most type classes in Haskell are like this... the compiler only
cares that you provide operations with certain types, but the type class
also carries around additional "laws" that you should obey when writing
instances.  Here there's no good way to write an instance that obeys the
laws, so it's better to write no instance at all.

-- 
Chris Smith


Re: [Haskell-cafe] Google Summer of Code idea of project & application

2012-03-19 Thread Chris Smith
On Mon, Mar 19, 2012 at 7:52 PM, Richard O'Keefe  wrote:
> As just one example, a recent thread concerned implementing
> lock-free containers.  I don't expect converting one of those
> to OCaml to be easy...

If you translate to core first, then the only missing bit is the
atomic compare-and-swap primop that these structures will depend on.
Maybe that exists in OCaml, or maybe not... I wouldn't know.  If not,
it would be perfectly okay to refuse to translate the atomic
compare-and-swap primop that lockless data structures will use.  That
said, though, there are literally *hundreds* of GHC primops for tiny
little things like comparing different sized integers and so forth,
that would all need to be implemented on top of the interesting
task of doing language translation.  That should be kept in mind when
estimating the task.

> If, however, you want to make it possible for someone to
> write code in a sublanguage of Haskell that is acceptable
> to a Haskell compiler and convert just *that* to OCaml, you
> might be able to produce something useful much quicker.

I'm quite sure, actually, that implementing a usable sublanguage of
Haskell in this way would be a much larger project even than
translating core.  A usable sublanguage of Haskell would need a
parser, which could be a summer project all on its own if done well
with attention to errors and a sizeable test suite.  It would need an
implementation of lazy evaluation, which can be quite tricky to get
right in a thread-safe and efficient way.  It would need type checking
and type inference that's just different enough from OCaml that you'd
probably have to write a new HM+extensions type checker and inference
engine on your own, and *that* could again be far more than a summer
project on its own, if you plan to build something of production
quality.  It would need a whole host of little picky features that
involve various kinds of desugarings that represent man-decades worth
of work just on their own.

After a bit of thought, I'm pretty confident that the only reasonable
way to approach this project is to let an existing compiler tackle the
task of converting from Haskell proper to a smaller language that's
more reasonable to think about (despite the problems with lots of
primops... at least those are fairly mechanical).  Not because of all
the advanced language features or libraries, but just because
re-implementing the whole front end of a compiler for even a limited
but useful subset of Haskell is a ludicrously ambitious and risky
project for GSoC.

-- 
Chris Smith



Re: [Haskell-cafe] Are there arithmetic composition of functions?

2012-03-19 Thread Chris Smith
On Mon, Mar 19, 2012 at 7:16 PM, Richard O'Keefe  wrote:
> One problem with hooking functions into the Haskell numeric
> classes is right at the beginning:
>
>    class (Eq a, Show a) => Num a

This is true in base 4.4, but is no longer true in base 4.5.  Hence my
earlier comment about if you're willing to depend on a recent version
of base.  Effectively, this means requiring a recent GHC, since I'm
pretty sure base is not independently installable.

-- 
Chris Smith



Re: [Haskell-cafe] Google Summer of Code idea of project & application

2012-03-19 Thread Chris Smith
Damien Desfontaines  wrote:
> Thanks for your answer. I must admit that I do not really realize how much 
> work
> such a project represents. I will probably need the help of someone who is 
> more
> experienced than me to decide my timeline, and perhaps to restrict the final
> goal of my work (perhaps to a syntaxic subset of Haskell ?).

I'll be a bit blunt, in the interest of encouraging you to be
realistic before going too far down a doomed path.  I can't imagine
anyone at all thinking that a translator from a toy subset of Haskell
into a different language would be useful in any way whatsoever.  The
goal of GSoC is to find a well-defined project that's reasonable for a
summer, and is USEFUL to a language community.  Restricting the
project to some syntactic subset of Haskell is what people are
*afraid* will happen, and why you've gotten some not entirely
enthusiastic answers.  It just won't do us any good, especially when
there's no visible community of people ready to pick up the slack and
finish the project later.

One possible way out of this trap would be if, perhaps, the variant of
Haskell you picked were actually GHC's core language.  That could
actually have a lot of advantages, such as avoid parsing entirely,
removing type classes, laziness (I think... GHC did make the swap to
strict core, didn't it?), and many other advanced type system features
entirely, and being at least a potentially useful result that works
with arbitrary code and all commonly used Haskell language extensions
on top of the entire language.  At least you are back into plausible
territory.

It still seems far too ambitious for GSoC, though.  And I remain
unconvinced how useful it really is likely to be.  I'll grant there
are other people that care a lot more about ML than I do.

-- 
Chris Smith



Re: [Haskell-cafe] Are there arithmetic composition of functions?

2012-03-19 Thread Chris Smith
On Mar 19, 2012 11:40 AM, "Ozgur Akgun"  wrote:
> {-# LANGUAGE FlexibleInstances #-}
>
> instance Num a => Num (a -> a) where

You don't want (a -> a) there.  You want (b -> a).  There is nothing about
this that requires functions to come from a numeric type, much less the
same one.

-- 
Chris Smith


Re: [Haskell-cafe] Are there arithmetic composition of functions?

2012-03-19 Thread Chris Smith
If you are willing to depend on a recent version of base where Num is no
longer a subclass of Eq and Show, it is also fine to do this:

instance Num a => Num (r -> a) where
  (f + g) x = f x + g x
  fromInteger = const . fromInteger

and so on.
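Filling in the "and so on", a fuller sketch might look like this, assuming base >= 4.5 so that Num carries no Eq/Show superclasses; the remaining methods lift pointwise in the same way as (+):

```haskell
{-# LANGUAGE FlexibleInstances #-}

-- Pointwise lifting of Num to functions; a sketch, not a
-- recommendation (see the rest of the thread for the caveats).
instance Num a => Num (r -> a) where
  (f + g) x   = f x + g x
  (f - g) x   = f x - g x
  (f * g) x   = f x * g x
  negate f    = negate . f
  abs f       = abs . f
  signum f    = signum . f
  fromInteger = const . fromInteger

main :: IO ()
main = print ((sin + cos) 0 :: Double)  -- sin 0 + cos 0
```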


Re: [Haskell-cafe] Google Summer of Code - Lock-free data structures

2012-03-18 Thread Chris Smith
On Mar 18, 2012 6:39 PM, "Florian Hartwig" 
wrote:
> GSoC stretches over 13 weeks. I would estimate that implementing a data
> structure, writing tests, benchmarks, documentation etc. should not take
> more
> than 3 weeks (it is supposed to be full-time work, after all), which means
> that I could implement 4 of them in the time available and still have some
> slack.

Don't underestimate the time required for performance tuning, and be
careful to leave yourself learning time, unless you have already
extensively used ThreadScope, read GHC Core, and worked with low-level
strictness, unpacking, possibly even rewrite rules.  I suspect that the
measurable performance benefit from lockless data structures might be
tricky to tease out of the noise created by unintentional strictness or
unboxing issues.  And we'd be much happier with one or two really
production quality implementations than even six or seven at a student
project level.

-- 
Chris Smith


Re: [Haskell-cafe] Theoretical question: are side effects necessary?

2012-03-17 Thread Chris Smith
On Sat, Mar 17, 2012 at 5:41 AM, Donn Cave  wrote:
> I hope the answer is not that in computer science we regard all
> effects as side effects because the ideal computer program simply
> exists without consequence.

The answer is that side effects has become something of a figure of
speech, and now has a specialized meaning in programming languages.

When we're talking about different uses of the word "function" in
programming languages, side effects refer to any effect other than
evaluating to some result when applied to some argument.  For example,
in languages like C, printf takes some arguments, and returns an int.
When viewed as just a function, that's all there is to it; functions
exist to take arguments and produce return values.  But C extends the
definition of a function to include additional effects, like making
"Hello world" appear on a nearby computer screen.  Because those
effects are "aside from" the taking of arguments and returning of
values that functions exist to do, they are "side effects"... even
though in the specific case of printf, the effect is the main goal and
everyone ignores the return value, still for functions in general, any
effects outside of producing a resulting value from its arguments are
"side effects".

I suppose Haskell doesn't have "side effects" in that sense, since its
effectful actions aren't confused with functions.  (Well, except from
silly examples like "the CPU gives off heat" or FFI/unsafe stuff like
unsafePerformIO.)  So maybe we should ideally call them just
"effects".  But since so many other languages use functions to
describe effectful actions, the term has stuck.  So pretty much when
someone talks about side effects, even in Haskell, they means stateful
interaction with the world.

-- 
Chris Smith



Re: [Haskell-cafe] Theoretical question: are side effects necessary?

2012-03-16 Thread Chris Smith
On Fri, Mar 16, 2012 at 3:43 PM, serialhex  wrote:
> an interesting question emerges:  even though i may be able to implement an
> algorithm with O(f(n)) in Haskell, and write a program that is O(g(n)) <
> O(f(n)) in C++ or Java...  could Haskell be said to be more efficient if
> time spent programming / maintaining Haskell is << C++ or Java??

There are two unrelated issues: (a) the efficiency of algorithms
implementable in Haskell, and (b) the efficiency of programmers
working in Haskell.  It makes no sense to ask a question that
conflates the two.  If you're unsure which definition of "efficient"
you meant to ask about, then first you should stop to define the words
you're using, and then ask a well-defined question.

That being said, this question is even more moot given that real
Haskell, which involves the IO and ST monads, is certainly no
different from any other language in its optimal asymptotics.  Even if
you discount IO and ST, lazy evaluation alone *may* recover optimal
asymptotics in all cases... it's known that a pure *eager* language
can add a log factor to the best case sometimes, but my understanding
is that for all known examples where that happens, lazy evaluation
(which can be seen as a controlled benign mutation) is enough to
recover the optimal asymptotics.

-- 
Chris Smith



Re: [Haskell-cafe] Empty Input list

2012-03-12 Thread Chris Smith
On Mon, Mar 12, 2012 at 3:14 PM, Kevin Clees  wrote:
> Now my function looks like this:
>
> tmp:: [(Int, Int)] -> Int -> (Int, Int)
> tmp [] y = (0,0)
> tmp xs y = xs !! (y-1)

Just a warning that this will still crash if the list is non-empty but
the index exceeds the length.  That's because your function is no
longer recursive, so you only catch the case where the top-level list
is empty.  The drop function doesn't crash when dropping too many
elements though, so you can do this and get a non-recursive function
that's still total:

tmp :: [(Int,Int)] -> Int -> (Int, Int)
tmp xs y = case drop (y-1) xs of
  []    -> (0,0)
  (x:_) -> x

-- 
Chris Smith



Re: [Haskell-cafe] Empty Input list

2012-03-12 Thread Chris Smith
Oh, and just to point this out, the function you're writing already
exists in Data.List.  It's called (!!).  Well, except that it's zero
indexed, so your function is more like:

tmp xs y = xs !! (y-1)



Re: [Haskell-cafe] Empty Input list

2012-03-12 Thread Chris Smith
On Mon, Mar 12, 2012 at 2:41 PM, Kevin Clees  wrote:
> what can I do, if a function gets an empty input list? I want, that it only 
> returns nothing.
> This is my source code:
>
> tmp:: [(Int, Int)] -> Int -> (Int, Int)
> tmp (x:xs) y
>        | y == 1 = x
>        | y > 1 = tmp xs (y-1)

It's not clear what you mean by "returns nothing" when the result is
(Int, Int)... there is no "nothing" value of that type.  But you can
add another equation to handle empty lists one you decide what to
return in that case.  For example, after (or before) the existing
equation, add:

tmp [] y = (-1, -1)

Or, you may want to use a Maybe type for the return... which would
mean there *is* a Nothing value you can return:

tmp:: [(Int, Int)] -> Int -> Maybe (Int, Int)
tmp (x:xs) y
       | y == 1 = Just x
       | y > 1  = tmp xs (y-1)
tmp [] y = Nothing

Does that help?
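Assembled into a runnable sketch, with a catch-all final equation so that y < 1 is also covered (the two guarded equations alone would fall through for y <= 0 on a non-empty list):

```haskell
-- Safe positional lookup: 1-indexed, Nothing on any out-of-range index.
tmp :: [(Int, Int)] -> Int -> Maybe (Int, Int)
tmp (x:xs) y
  | y == 1 = Just x
  | y > 1  = tmp xs (y - 1)
tmp _ _ = Nothing

main :: IO ()
main = do
  print (tmp [(1,2),(3,4)] 2)  -- Just (3,4)
  print (tmp [] 1)             -- Nothing
```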
-- 
Chris Smith



Re: [Haskell-cafe] ANNOUNCE: pipes-core 0.0.1

2012-03-12 Thread Chris Smith
On Mon, Mar 12, 2012 at 3:26 AM, Paolo Capriotti  wrote:
> I wouldn't say it's unsound, more like "not yet proved to be bug-free" :)
>
> Note that the latest master fixes all the issues found so far.

I was referring to the released version of pipes-core, for which
"known to be unsound" is an accurate description.  Good to hear that
you've got a fix coming, though.  Given the history here, maybe
working out the proofs of the category laws sooner rather than later
would be a good thing.  I'll have a look today and see if I can bang
out a proof of the category laws for your new code without ensure.

It will then be interesting to see how that compares to Gabriel's
approach, which at this point we've heard a bit about but I haven't
seen.

-- 
Chris Smith



Re: [Haskell-cafe] ANNOUNCE: pipes-core 0.0.1

2012-03-11 Thread Chris Smith
On Sun, Mar 11, 2012 at 8:53 PM, Mario Blažević  wrote:
>    May I enquire what was the reason for the non-termination of idP? Why was
> it not defined as 'forP yield' instead? The following command runs the way I
> expected.

With pipes-core (which, recall, is known to be unsound... just felt
this is a good time for a reminder of that, even though I believe the
subset that adds tryAwait and forP to be sound), you do get both (pipe
id) and (forP yield).  So discover which is the true identity, we can
try:

idP >+> forP yield == forP yield
forP yield >+> idP == forP yield

Yep, looks like idP is still the identity.

Of course, the real reason (aside from the fact that you can check and
see) is that forP isn't definable at all in Gabriel's pipes package.

-- 
Chris Smith



Re: [Haskell-cafe] ANNOUNCE: pipes-core 0.0.1

2012-03-11 Thread Chris Smith
On Sun, Mar 11, 2012 at 2:33 PM, Twan van Laarhoven  wrote:
> I think you should instead move unwaits in and out of the composition on the
> left side:
>
>    unawait x >> (p1 >+> p2) === (unawait x >> p1) >+> p2
>
> This makes idP a left-identity for (>+>), but not a right-identity, since
> you can't move unawaits in and out of p2.

Not sure how we got to the point of debating which of the category
laws pipes should break... messy business there.  I'm going to be in
favor of not breaking the laws at all.  The problem here is that
composition of chunked pipes requires agreement on the chunk type,
which gives the type-level guarantees you need that all chunked pipes
in a horizontal composition (by which I mean composition in the
category... I think you were calling that vertical?  no matter...)
share the same chunk type.  Paolo's pipes-extra does this by inventing
a newtype for chunked pipes, in which the input type appears in the
result as well.  There are probably some details to quibble with, but
I think the idea there is correct.  I don't like this idea of
implicitly just throwing away perfectly good data because the types
are wrong.  It shows up in the category-theoretic properties of the
package as a result, but it also shows up in the fact that you're
*throwing* *away* perfectly good data just because the type system
doesn't give you a place to put it!  What's become obvious from this
is that a (ChunkedPipe a b m r) can NOT be modelled correctly as a
(Pipe a b m r).

-- 
Chris Smith



Re: [Haskell-cafe] ANNOUNCE: pipes-core 0.0.1

2012-03-11 Thread Chris Smith
On Sun, Mar 11, 2012 at 11:22 AM, Mario Blažević  wrote:
>    No, idP does terminate once it consumes its input. Your idP >> p first
> reproduces the complete input, and then runs p with empty input.

This is just not true.  idP consumes input forever, and (idP >> p) =
idP, for all pipes p.

If it is composed with another pipe that terminates, then yes, the
*composite* pipe can terminate, so for example ((q >+> idP) >> p) may
actually do something with p.  But to get that effect, you need to
compose before the monadic bind... so for example (q >+> (idP >> p)) =
(q >+> idP) = q.  Yes, q can be exhausted, but when it is, idP will
await input, which will immediately terminate the (idP >> p) pipe,
producing the result from q, and ignoring p entirely.
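
To make the claim observable, here is a deliberately tiny free-monad model of pipes (toy definitions, not the real pipes package). In it, idP never reaches Pure, so the monadic continuation in (idP >> p) is simply never entered:

```haskell
-- Toy pipe: either done, awaiting an input, or yielding an output.
data Pipe a b r
  = Pure r
  | Await (a -> Pipe a b r)
  | Yield b (Pipe a b r)

bindP :: Pipe a b r -> (r -> Pipe a b s) -> Pipe a b s
bindP (Pure r)    f = f r
bindP (Await k)   f = Await (\a -> bindP (k a) f)
bindP (Yield b p) f = Yield b (bindP p f)

instance Functor (Pipe a b) where
  fmap f p = bindP p (Pure . f)
instance Applicative (Pipe a b) where
  pure      = Pure
  pf <*> px = bindP pf (\f -> fmap f px)
instance Monad (Pipe a b) where
  (>>=) = bindP

-- The identity pipe: forever await an input and yield it back.
idP :: Pipe a a r
idP = Await (\a -> Yield a idP)

-- Feed a finite input list; collect outputs; Nothing means the pipe was
-- still awaiting when input ran out, i.e. it never terminated on its own.
feed :: [a] -> Pipe a b r -> ([b], Maybe r)
feed _      (Pure r)    = ([], Just r)
feed xs     (Yield b p) = let (bs, r) = feed xs p in (b : bs, r)
feed (x:xs) (Await k)   = feed xs (k x)
feed []     (Await _)   = ([], Nothing)
```

Feeding any finite input to (idP >> p) echoes the input and then awaits forever; p's result is never produced, exactly as (idP >> p) = idP predicts.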

-- 
Chris Smith



Re: [Haskell-cafe] ANNOUNCE: pipes-core 0.0.1

2012-03-11 Thread Chris Smith
On Sun, Mar 11, 2012 at 10:30 AM, Mario Blažević  wrote:
>    It's difficult to say without having the implementation of both unawait
> and all the combinators in one package. I'll assume the following equations
> hold:

>   (p1 >> unawait x) >>> p2 = (p1 >>> p2) <* unawait x       -- this one
> tripped me up

I don't think this could reasonably hold.  For example, you'd expect
that for any p, idP >> p == idP since idP never terminates at all.
But then let p1 == idP, and you get something silly.  The issue is
with early termination: if p2 terminates first in the left hand side,
you don't want the unawait to occur.

-- 
Chris Smith



Re: [Haskell-cafe] ANNOUNCE: pipes-core 0.0.1

2012-03-11 Thread Chris Smith
On Sun, Mar 11, 2012 at 7:09 AM, Paolo Capriotti  wrote:
> Someone actually implemented a variation of Pipes with unawait:
> https://github.com/duairc/pipes/blob/master/src/Control/Pipe/Common.hs
> (it's called 'unuse' there).
>
> I actually agree that it might break associativity or identity, but I
> don't have a counterexample in mind yet.

Indeed, on further thought, it looks like you'd run into problems here:

unawait x >> await == return x
(idP >+> unawait x) >> await == ???

The monadic operation is crucial there: without it, there's no way to
observe which side of idP knows about the unawait, so you can keep it
local and everything is fine... but throw in the Monad instance, and
those pipes are no longer equivalent because they act differently in
vertical composition.  There is no easy way to fix this with (idP ==
pipe id).  You could kludge the identity pipes and make that law hold,
and I *think* you'd even keep associativity in the process so you
would technically have a category again.  But this hints to me that
there is some *other* law you should expect to hold with regard to the
interaction of Category and Monad, and now that is being broken.

-- 
Chris Smith



Re: [Haskell-cafe] Summer of Code idea: Haskell Web Toolkit

2012-03-06 Thread Chris Smith
My first impression on this is that it seems a little vague, but
possibly promising.

I'd make it clearer that you plan to contribute to the existing UHC
stuff.  A first glance left me with the impression that you wanted to
re-implement a JavaScript back end, which would of course be a
non-starter as a GSoC project.  Since the actual proposal is to work
on the build system and libraries surrounding the existing UHC back
end, I'd maybe suggest revising the proposal to be clearer about that,
and more specific about what parts of the current UHC compiler, build
system, and libraries you propose working on.



Re: [Haskell-cafe] Are all monads functions?

2011-12-31 Thread Chris Smith
On Dec 31, 2011 8:19 AM, "Yves Parès"  wrote:
> -- The plain Maybe type
> data Maybe a = Just a | Nothing
>
> -- The MaybeMonad
> newtype MaybeMonad a = MM ( () -> Maybe a )
>
> That's what using Maybe as a monad semantically means, doesn't it?

I'd have to say no.  That Maybe types are isomorphic to functions from ()
is not related to their being monads... indeed it's true of all types.  I'm
not sure what meaning you see in the function, but I don't see anything of
monads in it.
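
The isomorphism in question is trivial and holds at every type, with no Monad anywhere in sight; a sketch:

```haskell
-- Any type t is isomorphic to () -> t.  Nothing monadic about it:
-- 'to' and 'from' are mutually inverse for every t.
to :: t -> (() -> t)
to = const

from :: (() -> t) -> t
from f = f ()
```

Since this works for Int, Char, and everything else just as well as for Maybe, it cannot be what "using Maybe as a monad" means.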


Re: [Haskell-cafe] On the purity of Haskell

2011-12-30 Thread Chris Smith
On Fri, 2011-12-30 at 23:16 +0200, Artyom Kazak wrote:
> Thus, your function “f” is a function indeed, which generates a list of  
> instructions to kernel, according to given number.

Not my function, but yes, f certainly appears to be a function.

Conal's concern is that if there is no possible denotational meaning for
values of IO types, then f can't be said to be a function, since its
results are not well-defined, as values.

This is a valid concern... assigning a meaning to values of IO types
necessarily involves some very unsatisfying hand-waving about
indeterminacy, since for example IO actions can distinguish between
bottoms that are considered equivalent in the denotational semantics of
pure values (you can catch a use of 'error', but you can't catch
non-termination).  Nevertheless, I'm satisfied that to the extent that
any such meaning can be assigned, f will be a valid function on
non-bottom values.  Not perfect, but close.

-- 
Chris Smith




Re: [Haskell-cafe] On the purity of Haskell

2011-12-30 Thread Chris Smith
On Fri, 2011-12-30 at 12:24 -0600, Gregg Reynolds wrote:
> No redefinition involved, just a narrowing of scope.  I assume that,
> since we are talking about computation, it is reasonable to limit  the
> discussion to the class of computable functions - which, by the way,
> are about as deeply embedded in orthodox mathematics as you can get,
> by way of recursion theory.  What would be the point of talking about
> non-computable functions for the semantics of a programming language?

Computability is just a distraction here.  The problem isn't whether
"getAnIntFromUser" is computable... it is whether it's a function at
all!  Even uncomputable functions are first and foremost functions, and
not being computable is just a property that they have.  Clearly this is
not a function at all.  It doesn't even have the general form of a
function: it has no input, so clearly it can't map each input value to a
specific output value.  Now, since it's not a function, it makes little
sense to even try to talk about whether it is computable or not (unless
you first define a notion of computability for something other than
functions).

If you want to talk about things that read values from the keyboard or
such, calling them "uncomputable" is confusing, since the issue isn't
really computability at all, but rather needing information from a
constantly changing external environment.  I suspect that at least some
people talking about "functions" are using the word to mean a
computational procedure, the sort of thing meant by the C programming
language by that word.  Uncomputable is a very poor word for that idea.


-- 
Chris Smith




Re: [Haskell-cafe] On the purity of Haskell

2011-12-30 Thread Chris Smith
On Fri, 2011-12-30 at 12:45 -0600, Gregg Reynolds wrote:
> I spent some time sketching out ideas for using random variables to provide
> definitions (or at least notation) for stuff like IO.  I'm not sure I could
> even find the notes now, but my recollection is that it seemed like a
> promising approach.  One advantage is that this eliminates the kind of 
> informal
> language (like "user input") that seems unavoidable in talking about IO.
> Instead of defining e.g. readChar or the like as an "action" that does
> something and returns an char (or however standard Haskell idiom puts it),
> you can just say that readChar is a random char variable and be done with
> it.  The notion of "doing an action" goes away.  The side-effect of actually
> reading the input or the like can be defined generically by saying that
> evaluating a random variable always has some side-effect; what specifically
> the side effect is does not matter.

Isn't this just another way of saying the same thing that's been said
already?  It's just that you're saying "random variable" instead of "I/O
action".  But you don't really mean random variable, because there's all
this stuff about side effects thrown in which certainly isn't part of
any idea of random variables that anyone else uses.  What you really
mean is, apparently, I/O action, and you're still left with all the
actual issues that have been discussed here, such as when two I/O
actions (aka random variables) are the same.

There is one difference, and it's that you're still using the term
"evaluation" to mean performing an action.  That's still a mistake.
Evaluation is an idea from operational semantics, and it has nothing to
do with performing effects.  The tying of effects to evaluation is
precisely why it's so hard to reason about programs in, say, C
denotationally, because once there is no such thing as an evaluation
process, modeling the meaning of terms becomes much more complex and
amounts to reinventing operational semantics in denotational clothing.

I'd submit that it is NOT an advantage to any approach that the notion
of doing an action goes away.  That notion is *precisely* what programs
are trying to accomplish, and obscuring it inside functions and
evaluation rather than having a way to talk about it is handicapping
yourself from a denotational perspective.  Rather, what would be an
advantage (but also rather hopeless) would be to define the notion of
doing an action more precisely.

-- 
Chris Smith




Re: [Haskell-cafe] On the purity of Haskell

2011-12-30 Thread Chris Smith

> time t:  f 42   (computational process implementing func application
> begins…)
> t+1:= 1
> t+2:  43   (… and ends)
> 
> 
> time t+3:  f 42
> t+4:   = 2
> t+5:  44
> 
> 
> Conclusion:  f 42 != f 42

That conclusion would only follow if the same IO action always produced
the same result when performed twice in a row.  That's obviously untrue,
so the conclusion doesn't follow.  What you've done is entirely
consistent with the fact that f 42 = f 42... it just demonstrates that
whatever f 42 is, it doesn't always produce the same result when you
perform it twice.

What Conal is getting at is that we don't have a formal model of what an
IO action means.  Nevertheless, we know because f is a function, that
when it is applied twice to the same argument, the values we get back
(which are IO actions, NOT integers) are the same.

-- 
Chris Smith






Re: [Haskell-cafe] On the purity of Haskell

2011-12-30 Thread Chris Smith
On Fri, 2011-12-30 at 18:34 +0200, Artyom Kazak wrote:
> I wonder: can writing to memory be called a “computational effect”? If  
> yes, then every computation is impure. If no, then what’s the difference  
> between memory and hard drive?

The difference is that our operating systems draw an abstraction
boundary such that memory is private to a single program, while the hard
drive is shared between independent entities.  It's not the physical
distinction (which has long been blurred by virtual memory and caches
anyway), but the fact that they are on different sides of that
abstraction boundary.

-- 
Chris Smith





Re: [Haskell-cafe] Level of Win32 GUI support in the Haskell platform

2011-12-29 Thread Chris Smith
On Fri, 2011-12-30 at 01:53 +, Steve Horne wrote:
> I've been looking for functions like GetMessage, TranslateMessage and 
> DispatchMessage in the Haskell Platform Win32 library - the usual 
> message loop stuff - and not finding them. Hoogle says "no results found".

I see them in the Win32 package.

http://hackage.haskell.org/packages/archive/Win32/2.2.1.0/doc/html/Graphics-Win32-Window.html#v:getMessage

> Alternatively, should I be doing dialog-based coding and leaving Haskell 
> to worry about message loops behind the scenes?

Various people recommend the gtk (aka Gtk2Hs) and wx packages for that.
I've never been able to get wx to build, but gtk works fine.  Others
(mostly those using macs) report the opposite.

-- 
Chris Smith




Re: [Haskell-cafe] On the purity of Haskell

2011-12-29 Thread Chris Smith
On Fri, 2011-12-30 at 02:40 +, Steve Horne wrote:
> Well, we're playing a semantic game anyway. Treating effects as
> first-class concepts in themselves is fine, but IMO doesn't make
> Haskell pure.

Okay, so if you agree that:

(a) IO actions are perfectly good values in the Haskell sense.
(b) They are first class (can be passed to/returned from functions, etc.).
(c) When used as plain values, they have no special semantics.
(d) An IO action is no more a Haskell function than an Int is.
(e) All effects of a Haskell program are produced by the runtime system
performing the IO action called "main" (evaluating expressions lazily as
needed to do so), and NOT as a side-effect of the evaluation of
expressions.

Then we are completely in agreement on everything except whether the
word "pure" should apply to a programming language with those semantics.
Certainly the rest of the Haskell community describes this arrangement,
with effects being first-class values and performing those effects being
something done by the runtime system completely separate from evaluating
expressions, as Haskell being pure.  You can choose your terminology.

> I don't know the first thing about denotational semantics, but I do
> know this - if you place run-time behaviour outside the scope of your
> model of program semantics, that's just a limitation of your model. It
> doesn't change anything WRT the program itself - it only limits the
> understanding you can derive using that particular model.

The important bit about purity is that programs with I/O fit in to the
pure model just fine!  The pure model doesn't fully explain what the I/O
actions do, of course, but crucially, they also do not BREAK the pure
model.  It's a separation of concerns: I can figure out the higher-level
stuff, and when I need to know about the meaning of the values of
specific hairy and opaque data types like IO actions, or some complex
data structure, or whatever... well, then I can focus in and work out
the meaning of that bit when the time comes up.  The meanings of values
in those specific complex types doesn't affect anything except those
expressions that deal explicitly with that type.  THAT is why it's so
crucial that values of IO types are just ordinary values, not some kind
of magic thing with special evaluation rules tailored to them.

-- 
Chris Smith




Re: [Haskell-cafe] On the purity of Haskell

2011-12-29 Thread Chris Smith
On Fri, 2011-12-30 at 00:44 +, Steve Horne wrote:
> So, to resurrect an example from earlier...
> 
> f :: Int -> IO Int
> f = getAnIntFromTheUser >>= \i -> return (i+1)

Did you mean  f :: IO Int ?  If not, then I perhaps don't understand
your example, and your monad is not IO.  I'll continue assuming the
former.

> Are you claiming that the expression (i+1) is evaluated without knowing 
> the value of i?

I'm not sure what you mean by "evaluated" here.  I'd say it's in normal
form, but it has free variables so it's not even meaningful by itself;
it doesn't have a value in the first place.  On the other hand, the
larger expression, \i -> return (i+1), is closed *and* effectively in
normal form, so yes, I'd definitely say it is evaluated so far as that
word has any meaning at all.

> If not, at run-time your Haskell evaluates those expressions that 
> couldn't be fully evaluated at compile-time.

I certainly agree that the GHC runtime system, and any other Haskell
implementation's runtime system as well, evaluates expressions (some
representation of them anyway), and does lots of destructive updates to
boot.  This isn't at issue.  What is at issue is whether to shoehorn
those effects into the language semantics as a side-effect of evaluation
(or equivalently, force evaluation of expressions to be seen as an
effect -- when you only allow for one of these concepts, it's a silly
semantic game as to which name you call it by), or to treat effects as
semantically first-class concepts in their own right, different from the
simplification of expressions into values.

> If you do, we're back to my original model. The value returned by main 
> at compile-time is an AST-like structure wrapped in an IO monad 
> instance.

Here you're introducing implementation detail here that's rather
irrelevant to the semantics of the language.  Who knows whether compiler
and the runtime implementation build data structures corresponding to an
AST and run a reduction system on them, or use some other mechanism.
One could build implementations that do it many different ways.  In
fact, what most will do is generate machine code that directly performs
the desired effects and use closures with pointers to the generated
machine code.  But that's all beside the point.  If you need to know how
your compiler is implemented to answer questions about language
semantics, you've failed already.

Purity isn't about the RTS implementation, which is of course plenty
effectful and involves lots of destructive updates.  It's about the
language semantics.

-- 
Chris Smith




Re: [Haskell-cafe] On the purity of Haskell

2011-12-29 Thread Chris Smith
Sorry to cut most of this out, but I'm trying to focus on the central
point here.

On Thu, 2011-12-29 at 22:01 +, Steve Horne wrote:
> In pure functional terms, the result should be equivalent to a fully
> evaluated value - but putStrLn isn't pure. It cannot be fully
> evaluated until run-time.

And here it is, I think.  You're insisting on viewing the performing of
some effect as part of the evaluation of an expression, even though the
language is explicitly and intentionally designed not to conflate those
two ideas.  Effects do not happen as a side-effect of evaluating
expressions.  Instead they happen because you define the symbol 'main'
to be the effect that you want to perform, and then set the runtime
system to work on performing it by running your program.

Evaluation and effects are just not the same thing, and it makes no
sense to say something isn't "evaluated" just because the effect it
describes haven't been performed.  It's exactly that distinction -- the
refusal to conflate evaluation with performing effects -- that is
referred to when Haskell is called a pure language.

-- 
Chris Smith





Re: [Haskell-cafe] On the purity of Haskell /Random generators

2011-12-29 Thread Chris Smith
On Thu, 2011-12-29 at 21:04 +, Steve Horne wrote:
> AFAIK there's no hidden unsafePerformIO sneaking any entropy in behind
> the scenes. Even if there was, it might be a legitimate reason for
> unsafePerformIO - random numbers are in principle non-deterministic,
> not determined by the current state of the outside world and
> which-you-evaluate-first should be irrelevant.

This is certainly not legitimate.  Anything that can't be memoized has
no business advertising itself as a function in Haskell.  This matters
quite a lot... programs might change from working to broken due to
something as trivial as inlining by the compiler (see the ugly NOINLINE
annotations often used with unsafePerformIO tricks for initialization
code for an example).

-- 
Chris Smith




Re: [Haskell-cafe] On the purity of Haskell

2011-12-29 Thread Chris Smith
On Thu, 2011-12-29 at 18:07 +, Steve Horne wrote:
> By definition, an intentional effect is a side-effect. To me, it's by 
> deceptive redefinition - and a lot of arguments rely on mixing 
> definitions - but nonetheless the jargon meaning is correct within 
> programming and has been for decades. It's not going to go away.
> 
> Basically, the jargon definition was coined by one of the pioneers of 
> function programming - he recognised a problem and needed a simple way 
> to describe it, but in some ways the choice of word is unfortunate.

I don't believe this is true.  "Side effect" refers to having a FUNCTION
-- that is, a map from input values to output values -- such that when
it is evaluated there is some effect in addition to computing the
resulting value from that map.  The phrase "side effect" refers to a
very specific confusion: namely, conflating the performing of effects
with computing the values of functions.

Haskell has no such things.  Its values of IO types are not functions
at all, and their effects do not occur as a side effect of evaluating a
function.  Kleisli arrows in the IO monad -- that is, functions whose
result type is an IO type, for example String -> IO () -- are common,
yes, but note that even then, the effect still doesn't occur as a side
effect of evaluating the function.  Evaluating the function just gives
you a specific value of the IO type, and performing the effect is still
a distinct step that is not the same thing as function evaluation.
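
A small self-contained demonstration of that separation (the IORef is there only so the effect is observable): binding an IO value to a name performs nothing; the effect happens only where the runtime actually executes the action.

```haskell
import Data.IORef

-- Returns (counter before anything is performed, counter after performing
-- the action twice).  The 'let' binding alone changes nothing.
demo :: IO (Int, Int)
demo = do
  counter <- newIORef 0
  let bump = modifyIORef counter (+ 1)  -- a value of type IO (); no effect yet
  before <- readIORef counter           -- still 0: evaluation performed nothing
  bump                                  -- the action is performed here...
  bump                                  -- ...and again
  after <- readIORef counter            -- now 2
  return (before, after)

main :: IO ()
main = demo >>= print                   -- prints (0,2)
```

If evaluating `bump` were itself an effect, `before` would not be 0; the fact that it is 0 is exactly the purity being described.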

> You can argue pedantry, but the pedantry must have a point - a 
> convenient word redefinition will not make your bugs go away. People 
> tried that with "it's not a bug it's a feature" and no-one was impressed.

This most certainly has a point.  The point is that Haskell being a pure
language allows you to reason more fully about Haskell programs using
basic language features like functions and variables.  Yes, since
Haskell is sufficiently powerful, it's possible to build more and more
complicated constructs that are again harder to reason about... but even
when you do so, you end up using the core Haskell language to talk
*about* such constructs... you retain the ability to get your hands on
them and discuss them directly and give them names, not merely as side
aspects of syntactic forms as they manifest themselves in impure
languages.

That is the point of what people are saying here (pedantry or not is a
matter of your taste); it's directly relevant to day to day programming
in Haskell.

-- 
Chris Smith




Re: [Haskell-cafe] On the purity of Haskell

2011-12-29 Thread Chris Smith
Entering tutorial mode here...

On Thu, 2011-12-29 at 10:04 -0800, Donn Cave wrote:
> We can talk endlessly about what your external/execution results 
> might be for some IO action, but at the formulaic level of a Haskell
> program it's a simple function value, e.g., IO Int.

Not to nitpick, but I'm unsure what you might mean by "function value"
there.  An (IO Int) is not a function value: there is no function
involved at all.  I think the word function is causing some confusion,
so I'll avoid calling things functions when they aren't.

In answer to the original question, the mental shift that several people
are getting at here is this: a value of the type (IO Int) is itself a
meaningful thing to get your hands on and manipulate.  IO isn't just
some annotation you have to throw in to delineate where your non-pure
stuff is or something like that; it's a type constructor, and IO types
have values, which are just as real and meaningful as any other value in
the system.  For example,

Type: Int
Typical Values: 5, or 6, or -11

Type: IO Int
Typical Values: (choosing a random number from 1 to 10 with the default
random number generator), or (doing nothing and always returning 5), or
(writing "hello" to temp.txt in the current working directory and
returning the number of bytes written)

These are PURE values... they do NOT have side effects.  Perhaps they
"describe" side effects in a sense, but that's a matter of how you
interpret them; it doesn't change the fact that they play the role of
ordinary values in Haskell.  There are no special evaluation rules for
them.

Just like with any other type, you might then consider what operations
you might want on values of IO types.  For example, the operations you
might want on Int are addition, multiplication, etc.  It turns out that
there is one major operation you tend to want on IO types: combine two
of them by doing them in turn, where what you do second might depend on
the result of what you do first.  So we provide that operation on values
of IO types... it's just an ordinary function, which happens to go by
the name (>>=).  That's completely analogous to, say, (+) for Int...
it's just a pure function that takes two parameters, and produces a
result.  Just like (+), if you apply (>>=) to the same two parameters,
you'll always get the same value (of an IO type) as a result.
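
For instance, with made-up action names (a sketch, not anyone's library API), two IO values built purely and combined with (>>=) exactly as Ints are combined with (+):

```haskell
-- "doing nothing and always returning 5", as a first-class value
five :: IO Int
five = return 5

-- a Kleisli arrow: an ordinary function whose result type is an IO type
plusOne :: Int -> IO Int
plusOne n = return (n + 1)

-- (>>=) applied to the same two arguments always gives the same IO value,
-- just as (+) applied to the same two Ints always gives the same Int.
combined :: IO Int
combined = five >>= plusOne

main :: IO ()
main = combined >>= print   -- prints 6 when the runtime performs it
```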

Now, of course, behind the scenes we're using these things to describe
effectful actions... which is fine!  In fact, our entire goal in writing
any computer program in any language is *precisely* to describe an
effectful action, namely what we'd like to see happen when our program
is run.  There's nothing wrong with that... when Haskell is described as
pure, what is meant by that is that is lets us get our hands on these
things directly, manipulate them by using functions to construct more
such things, in exactly the same way we'd do with numbers and
arithmetic.  This is a manifestly different choice from other languages
where those basic manipulations even on the simple types are pushed into
the more nebulous realm of effectful actions instead.

If you wanted to make a more compelling argument that Haskell is not
"pure", you should look at termination and exceptions from pure code.
This is a far more difficult kind of impurity to explain away: we do it,
by introducing a special families of values (one per type) called
"bottom" or _|_, but then we also have to introduce some special-purpose
rules about functions that operate on that value... an arguably clearer
way to understand non-termination is as a side-effect that Haskell does
NOT isolate in the type system.  But that's for another time.

-- 
Chris Smith




Re: [Haskell-cafe] Time zones and IO

2011-11-06 Thread Chris Smith
On Sun, 2011-11-06 at 17:25 -0500, Heller Time wrote:
>  unless the machine running the program using time-recurrence was traveling 
> across timezones (and the system was updating that fact)

Note that this is *not* an unusual situation at all these days.  DST was
already mentioned, but also note that more and more software is running
on mobile devices that do frequently update their time zone information.
Unpredictably breaking code when this occurs is going to get a lot worse
when people starting building Haskell for their Android and iOS phones,
as we're very very close to seeing happen.

-- 
Chris




Re: [Haskell-cafe] Haskell Cloud and Closures

2011-10-01 Thread Chris Smith
On Sat, 2011-10-01 at 02:16 -0700, Fred Smith wrote:
> In seems to me that in cloud haskell library the function's closures
> can be computed only with top-level ones, is it possible to compute
> the closure at runtime of any function and to send it to another host?

The current rule is a bit overly restrictive, true.  But I just wanted
to point out that there is a good reason for having *some* restriction
in place.  There are certain types that should *not* be sent to other
processes or nodes.  Take MVar, for example.  It's not clear what it
would mean to send an MVar over a channel to a different node.

By extension, allowing you to send arbitrary functions not defined at
the top level is also problematic, because such functions might close
over references to MVars, making them essentially a vehicle for
smuggling MVars to new nodes.  And since the types of free variables
don't occur in the types of terms, there is no straight-forward Haskell
type signature that can express this limitation.  So the compiler is
obliged to specify some kind of sufficient restrictions to prevent you
from sending functions that close over MVar or other node-specific
types.

For now, you'll have to move all of your functions to the top level.
Hopefully, in the future, some relaxation of those rules can occur.

-- 
Chris




Re: [Haskell-cafe] instance Enum Double considerednotentirelygreat?

2011-09-27 Thread Chris Smith
On Tue, 2011-09-27 at 12:36 -0400, Steve Schafer wrote:
> [0.1,0.2..0.5] isn't the problem. The problem is coming up with
> something that not only works for [0.1,0.2..0.5], but also works for
> [0.1,0.2..1234567890.5].
> 
> A good rule of thumb: For every proposal that purports to eliminate
> having to explicitly take into consideration the limited precision of
> floating-point representations, there exists a trivial example that
> breaks that proposal.

If by "trivial" you mean easy to construct, sure.  But if you mean
typical, that's overstating the case by quite a bit.

There are plenty of perfectly good uses for floating point numbers, as
long as you keep in mind a few simple rules:

1. Don't expect any exact answers.

2. Don't add or subtract values of vastly different magnitudes if you
expect any kind of accuracy in the results.

3. When you do depend on discrete answers (like with the Ord functions)
you assume an obligation to check that the function you're computing is
continuous around the boundary.

If you can't follow these rules, you probably should find a different
type.  But there's a very large class of computing tasks where these
rules are not a problem at all.  In your example, you're breaking rule
#2.  It's certainly not a typical case to be adding numbers like 0.1 to
numbers in the billions, and if you're doing that, you should know in
advance that an approximate type is risky.
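
A quick illustration of rules 1 and 2 at the Double type (values picked to match the example in the thread):

```haskell
huge, small :: Double
huge  = 1234567890.5           -- exactly representable in a Double
small = 0.1                    -- not exactly representable

-- Rule 2 violated: mixing magnitudes destroys the small addend's precision.
lost :: Double
lost = (huge + small) - huge   -- NOT 0.1; roughly 1e-7 of it is gone

-- Comparable magnitudes: the error stays down near one ulp.
tinyErr :: Double
tinyErr = (1.5 + small) - 1.5
```

And rule 1: `0.1 + 0.2 == (0.3 :: Double)` is False, so exact comparisons are off the table from the start.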

-- 
Chris




Re: [Haskell-cafe] instance Enum Double considered notentirelygreat?

2011-09-27 Thread Chris Smith
On Tue, 2011-09-27 at 09:47 -0700, Iavor Diatchki wrote:
> As Ross pointed out in a previous e-mail the instance for Rationals is
> also broken:
> 
> > last (map fromRational [1,3 .. 20])
> > 21.0

Sure, for Int, Rational, Integer, etc., frankly I'd be in favor of a
runtime error when the last value isn't in the list.  You don't need
approximate behavior for those types, and if you really mean
takeWhile (<= 20) [1,3..], then you should probably write that, rather
than a list range notation that doesn't mean the same thing.
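
The difference is easy to check (the Rational behavior is the one Ross pointed out: numeric enumerations overshoot by half a step):

```haskell
ints :: [Int]
ints = [1,3..20]           -- last element is 19: stops below the limit

rats :: [Rational]
rats = [1,3..20]           -- the Enum instance overshoots: last element is 21

-- Saying exactly what you mean instead of relying on range notation:
explicit :: [Integer]
explicit = takeWhile (<= 20) [1,3..]   -- last element is 19
```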

-- 
Chris Smith




Re: [Haskell-cafe] instance Enum Double considerednotentirelygreat?

2011-09-27 Thread Chris Smith
On Tue, 2011-09-27 at 09:23 -0700, Donn Cave wrote:
> I think it's more than reasonable to expect
> 
>   [0.1,0.2..0.5] == [0.1,0.2,0.3,0.4,0.5]
> 
> and that would make everyone happy, wouldn't it?

But what's the justification for that?  It *only* makes sense because
you used short decimal literals.  If the example were:

let a = someComplicatedCalculation
b = otherComplicatedCalculation
c = thirdComplicatedCalculation
in  [a, b .. c]

then it would be far less reasonable to expect the notation to fudge the
numbers in favor of obtaining short decimal representations, which is
essentially what you're asking for.

-- 
Chris




Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-27 Thread Chris Smith
On Tue, 2011-09-27 at 00:29 -0700, Donn Cave wrote:
> It doesn't appear to me to be a technicality about the representation -
> the value we're talking about excluding is not just represented as
> greater than 0.3, it is greater than 0.3 when applied in computations.

Sure, the exact value is greater than 0.3.  But to *predict* that, you
have to know quite a bit about the technicalities of how floating point
values are represented.  For example, you need to know that 0.1 has no
exact representation as a floating point number, and that the closest
approximation is greater than the exact real number 0.1, and that the
difference is great enough that adding it twice adds up to a full ulp of
error.

> For example you can subtract 0.3 and get a nonzero value (5.55e-17.)

Again, if you're working with floating point numbers and your program
behaves in a significantly different way depending on whether you get 0
or 5.55e-17 as a result, then you're doing something wrong.

> The disappointment with iterative addition is not that
> its fifth value [should be] omitted because it's "technically" greater,
> it's that range generation via iterative addition does not yield the
> values I specified.

I certainly don't agree that wanting the exact value from a floating
point type is a reasonable expectation.  The *only* way to recover those
results is to do the math with the decimal or rational values instead of
floating point numbers.  You'll get the rounding error from floating
point regardless of how you do the computation, because the interval
just isn't really 0.1.  The difference between those numbers is larger
than 0.1, and when you step by that interval, you won't hit 0.5.

You could calculate the entire range using Rational and then convert
each individual value after the fact.  That doesn't seem like a
reasonable default, since it has a runtime performance cost.  Of course
you're welcome to do it when that's what you need.

> last ([0.1, 0.2 .. 0.5]) == 0.5
False

> last (map fromRational [0.1, 0.2 .. 0.5]) == 0.5
True

-- 
Chris





Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-26 Thread Chris Smith
On Mon, 2011-09-26 at 19:54 -0700, Donn Cave wrote:
> Pardon the questions from the gallery, but ... I can sure see that
> 0.3 shouldn't be included in the result by overshooting the limit
> (i.e., 0.30000000000000004), and the above expectations about
> [0,2..9] are obvious enough, but now I have to ask about [0,2..8] -
> would you not expect 8 in the result?  Or is it not an upper bound?

Donn,

[0, 2 .. 8] should be fine no matter the type, because integers of those
sizes are all exactly representable in all of the types involved.  The
reason for the example with a step size of 0.1 is that 0.1 is actually
an infinitely repeating number in base 2 (because the denominator has a
prime factor of 5).  So actually *none* of the exact real numbers 0.1,
0.2, or 0.3 are representable with floating point types.  The
corresponding literals actually refer to real numbers that are slightly
off from those.

Furthermore, because the step size is not *exactly* 0.1, when it's added
repeatedly in the sequence, the result has some (very small) drift due
to repeated rounding error... just enough that by the time you get in
the vicinity of 0.3, the corresponding value in the sequence is actually
*neither* the rational number 0.3, *nor* the floating point literal 0.3.
Instead, it's one ulp larger than the floating point literal because of
that drift.
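Both claims — that the literal 0.1 denotes a Double slightly above 1/10, and that the drift reaches a full ulp by 0.3 — are easy to check directly; a small sketch:

```haskell
-- The literal 0.1 denotes the Double nearest to 1/10, which is slightly
-- *above* the exact rational 1/10:
exactTenth :: Rational
exactTenth = toRational (0.1 :: Double)

aboveTenth :: Bool
aboveTenth = exactTenth > 1 / 10

-- Adding that approximation three times drifts one ulp past the
-- literal 0.3:
drifted :: Bool
drifted = 0.1 + 0.1 + 0.1 > (0.3 :: Double)
```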

So there are two perspectives here.  One is that we should think in
terms of exact values of the type Float, which means we'd want to
exclude it, because it's larger than the top end of the range.  The
other is that we should think of approximate values of real numbers, in
which case it's best to pick the endpoint closest to the stated one, to
correct for what's obviously unintended drift due to rounding.

So that's what this is about: do we think of Float as an approximate
real number type, or as an exact type with specific values.  If the
latter, then "of course" you exclude the value that's larger than the
upper range.  If the former, then using comparison operators like '<'
implies a proof obligation that the result of the computation remains
stable (loosely speaking, the function continuous) at that boundary
despite small rounding error in either direction.  In that case,
creating a language feature where, in the *normal* case of listing the
last value you expect in the list, 0.3 (as an approximate real number)
is excluded from this list just because of technicalities about the
representation is an un-useful implementation, to say the least, and
makes it far more difficult to satisfy that proof obligation.

Personally, I see floating point values as approximate real numbers.
Anything else is unrealistic: the *fact* of the matter is that no one is
reasoning about ulps or exact rational values when they use Float and
Double.  In practice, even hardware implementations of some floating
point functions have indeterminate results in the exact sense.  Often,
the guarantee provided by an FPU is that the result will be within one
ulp of the correct answer, which means the exact value of the answer
isn't even known!  So, should we put all floating point calculations in
the IO monad because they aren't pure functions?  Or can we admit that
people use floating point to approximate reals and accept the looser
reasoning?

-- 
Chris Smith





Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-26 Thread Chris Smith
On Mon, 2011-09-26 at 18:52 +0300, Yitzchak Gale wrote:
> Chris Smith wrote:
> > class Ord a => Range a where...
> 
> Before adding a completely new Range class, I would suggest
> considering Paul Johnson's Ranged-sets package:

Well, my goal was to try to find a minimal and simple answer that
doesn't break anything or add more complexity.  So I don't personally
find the idea of adding multiple *more* type classes appealing.

In any case, it doesn't make much difference either way.  It's clear
that no one is going to be satisfied here, so there's really no point in
making any change.  In fact, if this conversation leads to changes, it
looks like it will just break a bunch of code and make Haskell harder to
use.

-- 
Chris Smith





Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-26 Thread Chris Smith
On Mon, 2011-09-26 at 18:53 +0200, Lennart Augustsson wrote:
> If you do [0.1, 0.2 .. 0.3] it should leave out 0.3.  This is floating
> point numbers and if you don't understand them, then don't use them.
> The current behaviour of .. for floating point is totally broken, IMO.

I'm curious, do you have even a single example of when the current
behavior doesn't do what you really wanted anyway?  Why would you write
an upper bound of 0.3 on a list if you don't expect that to be included
in the result?  I understand that you can build surprising examples with
stuff that no one would really write... but when would you really *want*
the behavior that pretends floating point numbers are an exact type and
splits hairs?

I'd suggest that if you write code that depends on whether 0.1 + 0.1 +
0.1 <= 0.3, for any reason other than to demonstrate rounding error,
you're writing broken code.  So I don't understand the proposal to
change this notation to create a bunch of extra broken code.

-- 
Chris




Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-25 Thread Chris Smith
> Don't mix range and arithmetic sequences. I want arithmetic sequences for 
> Double, Float and Rational, but not range.
> (For Float and Double one could implement range [all values between the 
> given bounds, in increasing order, would be the desired/expected semantics 
> for that, I think?],

Okay, fine, I tried.  Obviously, I'm opposed to just flat removing
features from the language, especially when they are so useful that they
are being used without any difficulty at all by the 12-year-olds I'm
teaching right now.

Someone (sorry, not me) should really write up the proposed change to
Ord for Float/Double and shepherd them through the haskell-prime
process.  That one shouldn't even be controversial; there's already an
isNaN people should be using for NaN checks, and any code relying on the
current behavior is for all intents and purposes broken anyway.  The
only question is whether to add the new methods to RealFloat (breaking
on the bizarre off chance that someone has written a nonstandard
RealFloat instance), or add a new IEEE type class.

-- 
Chris Smith





Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-25 Thread Chris Smith
Would it be an accurate summary of this thread that people are asking
for (not including quibbles about naming and a few types):

class Ord a => Enum a where
succ :: a -> a
pred :: a -> a
fromEnum :: a -> Int(eger)
toEnum :: Int(eger) -> a
-- No instance for Float/Double

class Ord a => Range a where
rangeFromTo :: a -> a -> [a] -- subsumes Ix.range / Enum.enumFromTo
rangeFromThenTo :: a -> a -> a -> [a]
inRange   :: (a, a) -> a -> Bool
-- Does have instances for Float/Double.  List ranges desugar to this.
-- Also has instances for tuples

class Range a => InfiniteRange a where -- [1]
rangeFrom :: a -> [a]
rangeFromThen :: a -> a -> [a]
-- Has instances for Float/Double
-- No instances for tuples

class Range a => Ix a where
index :: (a, a) -> a -> Int
rangeSize :: (a, a) -> Int

-- Again no instances for Float/Double.  Having an instance here implies
-- that the rangeFrom* are "complete", containing all 'inRange' values

class (RealFrac a, Floating a) => RealFloat a where
... -- existing stuff
(.<.), (.<=.), (.>.), (.>=.), (.==.) :: a -> a -> Bool
-- these are IEEE semantics when applicable

instance Ord Float where ... -- real Ord instance where NaN has a place

There would be the obvious properties stated for types that are
instances of both Enum and Range, but this allows for non-Enum types to
still be Range instances.
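As a rough illustration of the intent, a Double instance of the proposed Range class might behave like this (entirely hypothetical — this class does not exist; the half-step fuzz is borrowed from the Report's numeric enumerations):

```haskell
-- Hypothetical sketch of the Range class proposed above.
class Ord a => Range a where
  rangeFromTo     :: a -> a -> [a]
  rangeFromThenTo :: a -> a -> a -> [a]
  inRange         :: (a, a) -> a -> Bool

-- A Double instance could keep the Report's numeric-enumeration rule:
-- step by the given interval, stopping half a step past the limit to
-- absorb accumulated rounding error.
instance Range Double where
  rangeFromTo lo hi = rangeFromThenTo lo (lo + 1) hi
  rangeFromThenTo lo next hi =
    let step  = next - lo
        limit = hi + step / 2
        keep  = if step >= 0 then (<= limit) else (>= limit)
    in  takeWhile keep (iterate (+ step) lo)
  inRange (lo, hi) x = lo <= x && x <= hi
```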

If there's general agreement on this, then we at least have a proposal,
and one that doesn't massively complicate the existing system.  The next
step, I suppose would be to implement it in an AltPrelude module and
(sadly, since Enum is changing meaning) a trivial GHC language
extension.  Then the real hard work of convincing more people to use it
would start.  If that succeeds, the next hard work would be finding a
compatible way to make the transition...

I'm not happy with InfiniteRange, but I imagine the alternative (runtime
errors) won't be popular in the present crowd.

-- 
Chris





Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-22 Thread Chris Smith
On Fri, 2011-09-23 at 11:02 +1200, Richard O'Keefe wrote:
> I do think that '..' syntax for Float and Double could be useful,
> but the actual definition is such that, well, words fail me.
> [1.0..3.5] => [1.0,2.0,3.0,4.0]   Why did anyone ever think
> _that_ was a good idea?

In case you meant that as a question, the reason is this:

Prelude> [0.1, 0.2 .. 0.3]
[0.1,0.2,0.30000000000000004]

Because of rounding error, an implementation that meets your proposed
law would have left out 0.3 from that sequence, when of course it was
intended to be there.  This is messy for the properties you want to
state, but it's almost surely the right thing to do in practice.  If the
list is longer, then the most likely way to get it right is to follow
the behavior as currently specified.  Of course it's messy, but the
world is a messy place, especially when it comes to floating point
arithmetic.

If you can clear this up with a better explanation of the properties,
great!  But if you can't, then we ought to reject the kind of thinking
that would remove useful behavior when it doesn't fit some theoretical
properties that looked nice until you consider the edge cases.

-- 
Chris





Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-20 Thread Chris Smith
On Wed, 2011-09-21 at 00:04 +0200, Ketil Malde wrote:
> > If Haskell defined list syntax in terms of something that's not called
> > Enum, that would be fine.  Renaming is never all that big a deal.  But
> > the list sugar is a big deal, and I don't think there's any point at all
> > in leaving the list sugar associated with something as minor as building
> > a representation of the inaccuracy of your approximations.
> 
> I must admit I don't understand this comment.  If the fixpoint library
> wants to provide the functionality (producing all values between two
> points), and can't/shouldn't use Enum, surely it must provide a
> different function, and let go of the list sugar?

Sorry to be unclear.  I mean that instead of removing a useful instance,
if people find the use of Enum for Float to be objectionable, then
perhaps (via language extensions, deprecation, all the usual backward
compatibility slow-change stuff) the desugaring of list ranges should be
changed to not use something with a name you'd object to, rather than
just removing the feature.

In any case, as long as Enum *is* the backing for list desugaring, it
seems like a mistake to define instances that are completely unuseful
for list desugaring.

-- 
Chris





Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-20 Thread Chris Smith
On Tue, 2011-09-20 at 17:28 -0400, Casey McCann wrote:
> Since removing the instances entirely is
> probably not a popular idea, the least broken solution would be to
> define NaN as equal to itself and less than everything else, thus
> accepting the reality of Ord as the "meaningless arbitrary total
> order" type class I suggested above and leaving Haskell bereft of any
> generic semantic comparisons whatsoever. Ah, pragmatism.

There's nothing *wrong* with pragmatism, but in any case, we seem to
agree on this.  As I said earlier, we ought to impose a (rather
arbitrary) total order on Float and Double, and then offer comparison
with IEEE semantics as a separate set of functions when they are needed.
(I wonder if Ocaml-style (<.) and (>.) and such are used anywhere.)

> It's not clear that Enum, as it stands, actually means anything coherent at 
> all.

It's clear to me that Enum for Float means something coherent.  If
you're looking for a meaning independent of the instance, I'd argue you
ought to be surprised if you find one, not the other way around.  Why
not look for a meaning for Monoid that's independent of the instance?
There isn't one; instead, there are some rules that the instance is
expected to satisfy, but there are plenty of types that have many
possible Monoid instances, and we pick one and leave you to use newtypes
if you wanted a different one.

I'm not saying that Enum must be left exactly as is... but I *am* saying
that the ability to use floating point types in list ranges is important
enough to save.  For all its faults, at least the current language can
do that.  When the solution to the corner cases is to remove a pervasive
and extremely useful feature, I start to get worried!

Yes, I could see (somehow in small steps that preserve backward
compatibility for reasonable periods) building some kind of clearer
relationship between Ord, Enum, and Ix, possibly separating Enum from a
new Range class that represents the desugaring of list ranges, or
whatever... but this idea of "I don't think this expresses a deep
underlying relationship independent of type, so let's just delete it
without regard to how useful it is" is very short-sighted.

-- 
Chris Smith





Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-20 Thread Chris Smith
On Tue, 2011-09-20 at 16:22 -0400, Jake McArthur wrote:
> This makes me wonder if maybe the reason this discussion is happening
> at all is that we don't have a well-defined meaning for what Enum
> *is*.

Certainly, we don't have a type-independent definition for Enum.  I'm
not sure whether it's possible to obtain that or not.  Keep in mind that
for plenty of common type classes, this is not possible.  For example,
consider Monoid.  By writing a monoid instance, you're making the rather
ridiculous claim that you are specifying *the* way to define a Monoid on
that type, when of course there are more than one, and there's no formal
way to say what that Monoid should do independent of whatever operation
happens to be most commonly used for combining values of that type.
Same for many Functor or Applicative or Monad instances.

So yes, we don't know how to define a type-independent meaning... but
I'm completely sure that it would be a mistake to start dropping useful
things from Haskell just because we're unable to put our finger on a
formalism for describing them precisely without assigning type-dependent
meanings.

> What exactly does Enum enumerate?

I'd say that just like the case with Monoid, Enum means whatever it's
most useful for it to mean with respect to some particular type.  We
could probably be a little more specific with laws that we expect the
instance to follow, such as:

enumFromTo a b == enumFromThenTo a (succ a) b

and so on.  But it's not always possible to define such a notion
independently of the type.

> To me, the list syntax sugar looks like I'm specifying bounds, so it
> makes sense to include all values within those bounds (and honestly,
> having instances for Float, Double, and Rational sounds like a mistake,
> given this)

It's unclear to me how you get from (has bounds) to (must include
*everything* in those bounds).  I'd definitely agree that for instances
of Enum that are also instances of Ord, you'd expect (all (>= a) [a ..])
and related properties.

> What does it mean to you? What makes the
> current behavior more useful than the proposed behavior?

[...]

> You say we've seen that this behavior is useful in this thread, but
> I'm not sure what it is we have seen.

More specifically, what I said is that we've seen that list range
notation is useful in some situations where a complete enumeration of
possible values is not useful, or where such an enumeration isn't the
same one we'd hope for out of a list range.  What I meant was that we've
noticed that the current behavior on Float is incompatible with being a
complete enumeration.  I'm taking it for granted that the current
behavior on Float is useful; I honestly don't see how you could argue
with that.  People use it all the time; I used it just this morning.  Of
course it's useful.

-- 
Chris





Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-20 Thread Chris Smith
On Tue, 2011-09-20 at 15:28 -0400, Casey McCann wrote:
> I actually think the brokenness of Ord for floating point values is
> worse in many ways, as demonstrated by the ability to insert a value
> into a Data.Set.Set and have other values "disappear" from the set as
> a result.

Definitely Ord is worse.  I'd very much like to see the Ord instance for
Float and Double abandon the IEEE semantics and just put "NaN" somewhere
in there -- doesn't matter where -- and provide new functions for the
IEEE semantics.
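A sketch of what such a split might look like (`totalCompare` is an invented name, and the placement of NaN is arbitrary, exactly as the message says it can be):

```haskell
-- A total ordering on Double that gives NaN a definite place (here,
-- below every other value and equal to itself):
totalCompare :: Double -> Double -> Ordering
totalCompare x y
  | isNaN x && isNaN y = EQ
  | isNaN x            = LT
  | isNaN y            = GT
  | otherwise          = compare x y
```

The separate IEEE-semantics functions would need no new code at all: GHC's existing (<), (<=), and (==) on Double already follow IEEE behavior, where every comparison involving NaN is False.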

As for Enum, if someone were to want a type class to represent an
enumeration of all the values of a type, then such a thing is reasonable
to want.  Maybe you can even reasonably wish it were called Enum.  But
it would be the *wrong* thing to use as a desugaring for list range
notation.  List ranges are very unlikely to be useful or even meaningful
for most such enumerations (what is [ Red, Green .. LightPurple]?); and
conversely, as we've seen in this thread, list ranges *are* useful in
situations where they are not a suitable way of enumerating all values
of a type.

-- 
Chris




Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-20 Thread Chris Smith
On Tue, 2011-09-20 at 17:39 +0200, Ketil Malde wrote:
> You forgot "confusing"?

I didn't forget it; whether it's confusing or not depends on the
perspective you're coming from.  The kids in my beginning programming
class are using Enum (via the list syntactic sugar) on Float and don't
get confused... so perhaps we ought to ask what the cause of the
confusion is.

> Expecting Enum to enumerate all inhabitants of
> a type seems very reasonable to me, and seems to hold for all
> non-floating point types.

Floating point (and fixed point, for that matter) types approximate real
numbers, which of course have no possible enumeration of all values.
Even if you want to say they approximate rational numbers, it remains
the case that the rationals have no linearly ordered enumeration of all
their values, which would be needed to be compatible with the
approximation.  It seems to me particularly pointless to define an Enum
instance that focuses on, above all else, the inaccuracy of that
approximation.

Incidentally, you can add Rational to the list of types that define Enum
that way and don't enumerate all possible values.  And the Haskell
Report gives a generic implementation of Enum in terms of Num, which
behaves that way.  Perhaps I was understating the case in saying the
behavior was established but undocumented; rather, it's explicitly
documented in the Haskell Report, just not as a requirement for
programmer-defined instances of the Num class (because that's not the
job of the Report).

> Or just avoid Enum, and define "range" or something similar instead.

If Haskell defined list syntax in terms of something that's not called
Enum, that would be fine.  Renaming is never all that big a deal.  But
the list sugar is a big deal, and I don't think there's any point at all
in leaving the list sugar associated with something as minor as building
a representation of the inaccuracy of your approximations.

-- 
Chris




Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-20 Thread Chris Smith
On Mon, 2011-09-19 at 22:09 -0700, Evan Laforge wrote:
> Then I tried switching to a fixed point format, and discovered my
> mistake.  Enum is supposed to enumerate every value between the two
> points, and the result is memory exhaustion.

I'm not sure where you read that "Enum is supposed to enumerate every
value between the two points".  It's not in the API documentation, and I
don't see it in the Haskell Report.

The better way to look at this is that the notion of `succ` and `pred`
is dependent on the type, much like `mappend` has no particular meaning
until a Monoid instance is given for the type.  It's fairly well
established, though undocumented, that Num types ought to have succ =
(+1) and pred = (subtract 1), so if your fixed point type doesn't do
that, I'd suggest it is the problematic part of this.
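For instance, GHC's own Enum instance for Double follows exactly that convention:

```haskell
-- succ and pred on Double are (+1) and (subtract 1); both results here
-- are exactly representable, so the equalities are exact.
next, prev :: Double
next = succ 1.5   -- 2.5
prev = pred 1.5   -- 0.5
```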

It would be a shame if we lost an occasionally useful and easy to read
language feature because of concerns with the disadvantages of some
hypothetical bad implementation.

> Is there any support for the idea of removing Enum instances for
> floating point numbers?

I certainly hope not.  Instead, perhaps the issue should be brought up
with the fixed-point number library you're using, and they could fix
their Enum instance to be more helpful.

-- 
Chris




Re: [Haskell-cafe] regex-applicative library needs your help! (algorithmic challenge)

2011-09-17 Thread Chris Smith
On Sat, 2011-09-17 at 22:11 -0400, Anton Tayanovskyy wrote:
> By the way, can Haskell have a type that admits regular expression and
> only those? I mostly do ML these days, so trying to write up a regex
> types in Haskell I was unpleasantly surprised to discover that there
> are all sorts of exotic terms inhabiting it, which I did not have to
> worry about in ML. Besides `undefined` you can have for terms that try
> to define a grammar with nested parentheses.

I'm not sure that I understand exactly what the question is... but if
you want to recover the set of values in ML types, you need only add an
'!' before each of the fields of each of your constructors, marking them
as strict fields.  Sadly (but unsurprisingly), this does require using a
different version of lists and other fundamental types as well.
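For instance, a strict regular-expression type along those lines might look like this (the type itself is illustrative, not taken from the thread):

```haskell
-- With strict fields, a constructor application is defined only when all
-- of its fields are: a partially-defined value like `Cat undefined r` is
-- forced to bottom, recovering ML-like data semantics (up to bottom).
data Regex
  = Empty
  | Lit  !Char
  | Cat  !Regex !Regex
  | Alt  !Regex !Regex
  | Star !Regex

-- A total function over the type, for demonstration:
size :: Regex -> Int
size Empty     = 1
size (Lit _)   = 1
size (Cat a b) = 1 + size a + size b
size (Alt a b) = 1 + size a + size b
size (Star r)  = 1 + size r
```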

You'll be left with bottom, of course, but in strict-language thinking,
that's nothing but an acknowledgement that a function returning that
type may diverge... something you can't exclude in Haskell *or* ML, it's
just that ML has a different natural way of thinking about it.  That way
lies Agda and Coq.  So you get the next-closest thing: a so-called flat
type, whose values are bottom, and a single layer above it in the
information ordering.

If I've misunderstood the question, my apologies... I haven't actually
been reading this thread.

-- 
Chris Smith





Re: [Haskell-cafe] VirtuaHac - online Haskell hackathon

2011-09-16 Thread Chris Smith
On Fri, 2011-09-16 at 23:14 +0300, Roman Cheplyaka wrote:
> Sounds good, but what is exactly the role of Google+ here?
> 
> I don't have an account and was not planning to get one, so I wonder how
> bad my experience will be without it.

Google+ is being used entirely for the Hangouts feature, which is their
system for small-group video chat.  Using Hangouts to simulate an
in-person face-to-face group interaction was the original idea that
motivated trying this out, so I see it as rather fundamental.  But, I
could be wrong.  Certainly, anyone is welcome to watch the wiki and hack
on the same projects at the same time.

-- 
Chris Smith





[Haskell-cafe] VirtuaHac - online Haskell hackathon

2011-09-16 Thread Chris Smith
Overview


VirtuaHac is a Haskell hackathon being planned using Google+ and a Wiki.
The rules will be that if you have a project you'd like to participate
in, then:

1. Create a darcs or git repository.
2. Post a Hangout in Google+.
3. Add a name, brief description, and links to the preceding two items
to the Wiki.

This way, we hope to recreate the idea of a hackathon, but without the
travel expenses or inconvenience.

What Next?
==

I'm taking a poll of availability, at
https://docs.google.com/spreadsheet/viewform?formkey=dDZ3Tjh1RzVDeHdRYWhNTkNPNUhpNkE6MQ

The poll will remain open for the next week, after which I'll choose a
date so we can all start planning.

-- 
Chris Smith




Re: [Haskell-cafe] Tupling functions

2011-09-13 Thread Chris Smith
On Wed, 2011-09-14 at 13:56 +1200, Richard O'Keefe wrote:
> I don't *expect* to implement anything just once.  I am perfectly
> happy writing as many instance declarations as I have tuple sizes
> that I care about.

Ah, okay... then sure, you can do this:

class Tuple a b c | a b -> c where
tuple :: a -> b -> c

instance Tuple (a -> b, a -> c) a (b,c) where
tuple (f,g) x = (f x, g x)

and so on...  You'll need fundeps (or type families if you prefer to
write it that way), and probably at least flexible and/or overlapping
instances, too, but of course GHC will tell you about those.
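Spelled out as a compilable module, with the extensions the reply alludes to and a three-tuple instance added, the sketch becomes:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances #-}

-- One instance per tuple size, as the original poster was happy to write.
class Tuple a b c | a b -> c where
  tuple :: a -> b -> c

instance Tuple (a -> b, a -> c) a (b, c) where
  tuple (f, g) x = (f x, g x)

instance Tuple (a -> b, a -> c, a -> d) a (b, c, d) where
  tuple (f, g, h) x = (f x, g x, h x)
```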

-- 
Chris Smith




Re: [Haskell-cafe] Tupling functions

2011-09-13 Thread Chris Smith
On Wed, 2011-09-14 at 13:35 +1200, Richard O'Keefe wrote:
> I would like to have
> 
>   tuple (f1,f2)   x = (f1 x, f2 x)
>   tuple (f1,f2,f3)x = (f1 x, f2 x, f3 x)
>   tuple (f1,f2,f3,f4) x = (f1 x, f2 x, f3 x, f4 x)
>   ...
> 
> I'm aware of Control.Arrow and the &&& combinator, and I can use that
> instead, but f1 &&& f2 &&& f3 doesn't have _exactly_ the type I want.
> 
> What should I do?

There is no polymorphism across tuple structures, so if you absolutely
*must* have n-tuples instead of nested 2-tuples, then you just need to
implement the new functions as needed.  You can't implement that only
once.  Plenty of places in base do this, especially for instances.

-- 
Chris Smith





Re: [Haskell-cafe] Finger Tree without using Monoid

2011-09-01 Thread Chris Smith
I'm curious why you wanted a finger tree without the Monoid instance...
if you need a different Monoid instance, you can probably simplify your
code significantly by using a newtype wrapper around Seq rather than
re-implementing it.
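The newtype-for-another-Monoid pattern being suggested is the same one base itself uses for numbers; a sketch of the general idea (not finger-tree-specific):

```haskell
import Data.Monoid (Sum (..), Product (..))

-- Numbers admit more than one lawful Monoid, so base exposes each
-- choice through a newtype rather than privileging one instance:
total, prod :: Int
total = getSum     (foldMap Sum     [1, 2, 3, 4])   -- additive monoid
prod  = getProduct (foldMap Product [1, 2, 3, 4])   -- multiplicative monoid
```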

-- 
Chris Smith




Re: [Haskell-cafe] Off-topic: Mathematics and modesty

2011-08-30 Thread Chris Smith
On Tue, 2011-08-30 at 20:58 +0200, Jerzy Karczmarczuk wrote:
> With all my respect:
> I think I know several mathematicians who learning that a person asking 
> for help begins with trying to distinguish  between knowledgeable, and 
> those who just think they are, will simply - to say it politely - refuse 
> to engage.

I don't agree with this.  It's the most natural thing in the world to
listen to an answer and then try to figure out whether the speaker knows
what they are talking about or not.  Those who expect us to forego that
step aren't really engaged in mathematics any more.

-- 
Chris Smith




Re: [Haskell-cafe] GHC API question

2011-08-28 Thread Chris Smith
On Sun, 2011-08-28 at 17:47 +0100, Thomas Schilling wrote:
> I don't think you can link GHCi with binaries compiled in profiling
> mode.  You'll have to build an executable.

Okay... sorry to be obtuse, but what exactly does this mean?  I'm not
using GHCi at all: I *am* in an executable built with profiling info.

I'm doing this:

dflags <- GHC.getSessionDynFlags
let dflags' = dflags {
GHC.ghcMode = GHC.CompManager,
GHC.ghcLink = GHC.LinkInMemory,
GHC.hscTarget = GHC.HscAsm,
GHC.optLevel = 2,
GHC.safeHaskell = GHC.Sf_Safe,
GHC.packageFlags = [GHC.TrustPackage "gloss" ],
GHC.log_action = addErrorTo codeErrors
}
GHC.setSessionDynFlags dflags'
target <- GHC.guessTarget filename Nothing
GHC.setTargets [target]
r  <- fmap GHC.succeeded (GHC.load GHC.LoadAllTargets)

and then if r is true:

mods <- GHC.getModuleGraph
let mainMod = GHC.ms_mod (head mods)
Just mi <- GHC.getModuleInfo mainMod
let tyThings = GHC.modInfoTyThings mi
let var = chooseTopLevel varname tyThings
session <- GHC.getSession
v   <- GHC.liftIO $ GHC.getHValue session (GHC.varName var)
return (unsafeCoerce# v)

Here, I know that chooseTopLevel is working, but the getHValue part only
works without profiling.  So is this still hopeless, or do I just need
to find the right additional flags to add to dflags'?

-- 
Chris Smith





Re: [Haskell-cafe] GHC API question

2011-08-28 Thread Chris Smith
Okay, I should have waited until morning to post this... so actually,
things still work fine when I build without profiling.  However, when I
build with profiling, I get the segfault.  I'm guessing either I need to
set different dynamic flags with the profiling build to match the
options of the compiler that built the executable... or perhaps it's
still impossible to do what I'm looking for with profiling enabled.
Does anyone know which is the case?

-- 
Chris




[Haskell-cafe] GHC API question

2011-08-27 Thread Chris Smith
I'm using the GHC API in GHC 7.2, and running into some problems.  For
background, I have working code that uses compileExpr to get a value
from a dynamically loaded module.  However, I'd like to do some
profiling, and it appears that compileExpr doesn't work from executables
that are built with profiling.

So instead, I tried to take a more circuitous route... I'm using
getModuleInfo and modInfoTyThings to get a list of all the declarations
in the module, and finding the one I want, which I call var.  This all
works fine, and I can print the type and the name, and I know I have the
right thing and it's got the correct type.  But then I do:

session <- getSession
v <- liftIO $ getHValue session var
return (unsafeCoerce# v)

and I get a segfault when I try to access the resulting value.  Keep in
mind that this is the same value that works fine when I access it with
compileExpr on an expression I've constructed to retrieve it.

Any ideas what's going on?  Am I missing a step?

-- 
Chris Smith




Re: [Haskell-cafe] GHC API gives a weird error about .hi-boot files

2011-08-15 Thread Chris Smith
On Mon, 2011-08-15 at 03:34 -0600, Chris Smith wrote:
> Can anyone tell me what I'm doing wrong with the GHC API?

Answering my own question: I was passing OneShot as the mode instead of
CompManager.  I still don't understand why, but random trial and error
to the rescue!  That solves the problem for now... on to figure out the
SafeHaskell extension so I can get this online on a public network.

-- 
Chris Smith




[Haskell-cafe] GHC API gives a weird error about .hi-boot files

2011-08-15 Thread Chris Smith
Can anyone tell me what I'm doing wrong with the GHC API?  I'm getting
the following error message from the GHC API:

Can't find interface-file declaration for variable Main.picture
  Probable cause: bug in .hi-boot file, or inconsistent .hi file
  Use -ddump-if-trace to get an idea of which file caused the error

The source code I'm loading doesn't seem to matter at all, except that
it defines a top-level variable called 'picture'.  Of course, if I
change the module name then I get a different module name in the error,
but that's about it.

The code I'm using to load the file is as follows:

getPicture src = do
fn <- chooseFileName ".hs"
B.writeFile fn src
codeErrors <- newIORef []
GHC.defaultErrorHandler (addErrorTo codeErrors)
$ GHC.runGhc (Just GHC.libdir)
$ GHC.handleSourceError (handle codeErrors) $ do
dflags <- GHC.getSessionDynFlags
GHC.setSessionDynFlags $ dflags {
GHC.ghcMode = GHC.OneShot,
GHC.ghcLink = GHC.LinkInMemory,
GHC.hscTarget = GHC.HscInterpreted,
GHC.log_action = addErrorTo codeErrors
}
target <- GHC.guessTarget fn Nothing
GHC.setTargets [target]
r <- fmap GHC.succeeded (GHC.load GHC.LoadAllTargets)
case r of
True -> do
mods <- GHC.getModuleGraph
GHC.setContext
[ GHC.ms_mod (head mods) ]
[ GHC.simpleImportDecl
(GHC.mkModuleName "Graphics.Gloss") ]
v <- GHC.compileExpr "picture :: Picture"
return (Right (unsafeCoerce# v :: Picture))
False -> return (Left codeErrors)

I can't see anything I'm doing obviously wrong here, but I'm not at all
familiar with GHC's build process, so I'm hoping someone can pipe up and
point out the obvious stuff I've missed.

Any ideas?

-- 
Chris Smith




Re: [Haskell-cafe] Proposal #3339: Add (+>) as a synonym for mappend

2011-08-14 Thread Chris Smith
On Sun, 2011-08-14 at 21:05 +0300, Yitzchak Gale wrote:
> Brandon Allbery wrote:
> > Anything useful has to be modified to depend on SemiGroup as well to get
> > mconcat or its replacement; that's why you jumped the proposal to begin
> > with
> 
> Not at all. Types with Monoid instances need an additional
> instance, a Semigroup instance

That does require depending on semigroups though, and I think that's
what Brandon was saying.

Of course, the obvious solution to this would be to promote semigroups,
e.g., by adding it to the Haskell Platform or including it in base...
but the current semigroups package is a bit heavyweight for that; it
exports four new modules for what is really a very simple concept!
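A minimal sketch of the concept under discussion: a single class with one associative binary operator, here spelled (+>) as in the proposal. The class name `MySemigroup` is hypothetical, chosen to avoid clashing with the real semigroups package (or base's later Semigroup class):

```haskell
-- The whole "very simple concept": an associative binary operation,
-- with no identity element required.
class MySemigroup a where
  (+>) :: a -> a -> a

-- Lists form a semigroup under concatenation.
instance MySemigroup [a] where
  (+>) = (++)

main :: IO ()
main = putStrLn ("mon" +> "oid")
```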

-- 
Chris Smith




Re: [Haskell-cafe] type-class inference

2011-08-12 Thread Chris Smith
On Fri, 2011-08-12 at 23:52 +0100, Patrick Browne wrote:
> -- Second in the case of a method of a type class.
> -- Inferred Num
> *Main> :t  g 3
> g 3 :: forall t. (A t, Num t) => t
> -- Did not print class A.
> *Main> :t g T
> g T :: T
> -- Did not print any class.

This is because you already know that T is T.  The compiler has checked
that T is, in fact, an instance of A, but it need not tell you so
because it has information that's strictly more specific than that.

> *Main> :t g (3::Integer)
> g (3::Integer) :: Integer

Same thing.  Integer is an instance of A, so telling you it's an Integer
is even better (more specific).

-- 
Chris Smith




Re: [Haskell-cafe] fyi GHC 7.2.1 release on the benchmarks game

2011-08-12 Thread Chris Smith
On Fri, 2011-08-12 at 11:44 -0500, austin seipp wrote:
> > 2) I noticed `-fvia-C` has now gone away [...]
> 
> I can't foresee the potential performance ramifications, but frankly
> -fvia-C has been deprecated/not-advised-for-use for quite a while now,
> and I wonder how many of these programs just have not been
> updated/tested with the native code generator since they were written.
> 
> In any case it's not an option anymore, so your only choice is to nuke
> it from orbit (orbit being the Makefiles.)

Well, the better option would be to try with the NCG, and also with LLVM
(the -fllvm flag).  While the NCG is certainly competitive for idiomatic
Haskell code, it's likely to be a bit behind when it comes to heavy
C-in-Haskell code like what often gets submitted to the shootout.  LLVM
seems likely to do better in some cases.

-- 
Chris




Re: [Haskell-cafe] Difference between class and instance contexts

2011-08-03 Thread Chris Smith
On Aug 3, 2011 1:33 PM, "Patrick Browne"  wrote:
> instance Class Integer => SubClass Integer where
>moo a = foo a

Since you've just written the Class instance for Integer, the superclass
context is actually irrelevant there.  You may as well just write

instance SubClass Integer where
moo a = foo a

And that goes to the point of what the difference is.  In the first case,
you were declaring that all SubClass instances are Class instances,
and that moo defaults to foo for ALL types.  In the latter case, you're
defining this just for Integer.  The difference is whether that default
exists for other types, or if it's specific to Integer.
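The two forms side by side, as a compilable sketch (the method bodies for foo are made up to give something runnable):

```haskell
class Class a where
  foo :: a -> a

-- Superclass context on the class itself: every SubClass instance
-- must also be a Class instance, and moo defaults to foo for ALL types.
class Class a => SubClass a where
  moo :: a -> a
  moo = foo

instance Class Integer where
  foo = (* 2)

-- Writing "instance Class Integer => SubClass Integer" here would be
-- redundant: the Class Integer instance above is already in scope.
instance SubClass Integer

main :: IO ()
main = print (moo (21 :: Integer))
```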

-- 
Chris Smith


Re: [Haskell-cafe] Fractional Part

2011-08-02 Thread Chris Smith
On Wed, 2011-08-03 at 02:06 +0300, Ata Jafari wrote:
> In the first step I want to write a little code that can give me only  
> the decimal part of a float. For instance:

properFraction from the RealFrac type class will divide into the real
and fractional parts.  Once you've got the fractional part, converting
that into an integer is a bit trickier.

First, you should realize that it's only possible if the number has a
terminating decimal representation, which happens precisely when it is
rational, and in reduced fraction form, the denominator has only 2 and 5
as prime factors.  Conveniently, an IEEE floating point number will
always be of that form, so if you assume that the implementation uses an
IEEE floating point format, you're golden!

You'll then want to multiply both the numerator and denominator by a
common multiplier to get the number of 2s and 5s in the factorization of
the denominator to be the same.  Then the denominator is a power of 10,
so the numerator is your answer.

Some simple code might look like this (numerator and denominator come
from Data.Ratio):

import Data.Ratio (numerator, denominator)

toDecimalPart :: Double -> Integer
toDecimalPart x = n * 5 ^ k
  where
    fracPart = snd (properFraction x :: (Integer, Double))
    r = toRational fracPart
    d = denominator r
    n = numerator r
    k = log2 d

log2 :: Integer -> Integer
log2 1 = 0
log2 n | even n && n > 1 = 1 + log2 (n `quot` 2)
       | otherwise       = error "log2: denominator not a power of 2"

-- 
Chris Smith





Re: [Haskell-cafe] Regular Expression Parsing via derivatives

2011-08-01 Thread Chris Smith
On Mon, 2011-08-01 at 12:38 -0400, Alex Clemmer wrote:
> Hmm. Not sure how I missed that. And, I also inquired about developing
> a "core feature" instead of a library -- implying disparity where in
> retrospect there doesn't appear to be any.

Right... the only regular expression support for Haskell at all comes in
the form of libraries.  One of the nice things about Haskell is how
little has to be built in to the language.

-- 
Chris Smith




Re: [Haskell-cafe] (no subject)

2011-07-30 Thread Chris Smith
On Sat, 2011-07-30 at 15:07 -0700, KC wrote:
> A language that runs on the JVM or .NET has the advantage of Oracle &
> Microsoft making those layers more parallelizable.

On top of the answers you've got regarding whether this exists, let me
warn you against making assumptions like the above.  There are certainly
good reasons for wanting Haskell to run on the JVM or CLR, but
parallelism doesn't look like one of them.

The problem is that the cost models of things on the JVM or CLR are so
different that if you directly expose the threading and concurrency
stuff from the JVM or CLR, you're going to kill all the Haskell bits of
parallelism.  A huge contribution of Haskell is to have very
light-weight threads, which can be spawned cheaply and can number in the
tens of thousands, if not hundreds of thousands.  If you decide that
forkIO will just spawn a new Java or CLR thread, performance of some
applications will change by orders of magnitude, or they will just plain
crash and refuse to run.  Differences of that scope are game-changing.
So you risk, not augmenting Haskell concurrency support by that of the
JVM or CLR, but rather replacing it.  And that certainly would be a
losing proposition.

Maybe there's a creative way to combine advantages from both, but it
will require something besides the obvious one-to-one mapping of
execution contexts.
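A small sketch of what makes GHC's threads light-weight: spawning ten thousand of them is routine, whereas one OS thread per forkIO (the naive JVM/CLR mapping) would be prohibitive at this scale.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.QSemN (newQSemN, signalQSemN, waitQSemN)
import Control.Monad (forM_)

-- Spawn 10,000 green threads, each signalling a semaphore,
-- and wait until every one of them has run.
main :: IO ()
main = do
  let n = 10000 :: Int
  sem <- newQSemN 0
  forM_ [1 .. n] $ \_ -> forkIO (signalQSemN sem 1)
  waitQSemN sem n
  putStrLn ("joined " ++ show n ++ " threads")
```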

-- 
Chris




Re: [Haskell-cafe] XCode Dependency for HP on Mac

2011-07-27 Thread Chris Smith
On Wed, 2011-07-27 at 07:20 -0400, Jack Henahan wrote:
> Bundling things with the HP is just going to bloat that download
> and confuse new users more (and my god, the dep-chasing...  the
> number of libs that might have to be piled in on top of it could
> be absurd).

I don't understand this.  Are you saying it would be too hard for the
Haskell Platform maintainers to build the install kits?  It seems like
bundling gcc would be just the thing to solve all the problems with the
XCode dependency (which I'm now told include not just the install-time
dependencies, but also the Haskell Platform regularly breaking with
every new operating system release).

-- 
Chris Smith





Re: [Haskell-cafe] XCode Dependency for HP on Mac

2011-07-27 Thread Chris Smith
On Wed, 2011-07-27 at 08:27 +0100, Tim Cowlishaw wrote:
> (Perhaps wandering slightly O/T, but...) Having done some teaching in
> similar circumstances before (although not with Haskell), I'd highly
> recommend this approach. In fact, I'd probably have all the students,
> regardless of OS install VMWare or VirtualBox, and then distribute a
> VM image with the Haskell Platform and any other tools they need
> preinstalled.

Thanks for the advice.  I'd like to avoid this, because I want to leave
the students with the impression that they have the tools to do their
own programming for their own computers when they finish... but at least
it's an option that lets Mac users have a working environment of some
sort.  I've never had any problems with the Windows installation of the
HP, and I'm knowledgeable enough to help with Linux, so I'm not worried
about those.

-- 
Chris




Re: [Haskell-cafe] XCode Dependency for HP on Mac

2011-07-27 Thread Chris Smith
Okay, you're all scaring me again.  I'm supposed to be teaching a class
this next school year, on Haskell programming, to middle schoolers aged
12 to 13.  Some of the students will be using Macs, and I'm again very
confused about the situation of the Haskell platform on MacOS.  There
are different installation requirements for different versions of the
operating system?  Is there a good complete source somewhere for
information on how to get this installed, across different versions of
MacOS, with a minimum of needing people to have the install disks that
came with their computer?

Alternatively, maybe it would be easier to have the Mac users install
VMWare's free version, and I can just have them install Windows or Linux
in that?  Or does that have weird dependency issues like this too?

-- 
Chris Smith




  1   2   3   >