Re: Reading floating point

2016-10-10 Thread David Feuer
It may currently be true for floats, but it's never been true in general,
particularly with regard to records. Read is not actually designed to parse
Haskell; it's for parsing "Haskell-like" things. Because it, unlike a true
Haskell parser, is type-directed, there are somewhat different trade-offs.
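
To see concretely what type-directedness means here, consider a small GHCi
sketch (illustrative of base's behavior, not an exhaustive spec): the same
string parses or fails depending on the type demanded, and the special
values mentioned below don't survive a show/read round trip at all.

    ghci> read "2.5" :: Double
    2.5
    ghci> read "2.5" :: Int
    *** Exception: Prelude.read: no parse
    ghci> show (0/0 :: Double)
    "NaN"
    ghci> read "NaN" :: Double
    *** Exception: Prelude.read: no parse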

On Oct 11, 2016 1:50 AM, "Carter Schonwald" wrote:

> How is that not a bug? We should be able to read back floats
>
> On Monday, October 10, 2016, David Feuer  wrote:
>
>> It doesn't, and it never has.
>>
>> On Oct 10, 2016 6:08 PM, "Carter Schonwald" wrote:
>>
>>> Read should accept exactly the valid source literals for a type.
>>>
>>> On Monday, October 10, 2016, David Feuer  wrote:
>>>
 What does any of that have to do with the Read instances?

 On Oct 10, 2016 1:56 PM, "Carter Schonwald" wrote:

> The right solution is to fix things so we have a scientific-notation
> literal representation available. Any other contortions run into
> challenges in representability of things. That's of course ignoring
> denormalized floats, infinities, negative zero and perhaps NaNs.
>
> At the very least we need to efficiently and safely support everything
> but NaN. And I have some ideas for that I hope to share soon.
>
> On Monday, October 10, 2016, David Feuer wrote:
>
>> I fully expect this to be somewhat tricky, yes. But some aspects of
>> the current implementation strike me as pretty clearly non-optimal.
>> What I meant about going through Rational is that given "625e-5", say,
>> it calculates 625 % 10^5, producing a fraction in lowest terms, before
>> calling fromRational, which itself invokes fromRat'', a division
>> function optimized for a special case that doesn't seem too relevant
>> in this context. I could be mistaken, but I imagine even reducing to
>> lowest terms is useless here. The separate treatment of the digits
>> preceding and following the decimal point doesn't do anything
>> obviously useful either. If we (effectively) normalize in decimal to
>> an integral mantissa, for example, then we can convert the whole
>> mantissa to an Integer at once; this will balance the merge tree
>> better than converting the two pieces separately and combining.
>>
>> On Oct 10, 2016 6:00 AM, "Yitzchak Gale"  wrote:
>>
>> The way I understood it, it's because the type of "floating point"
>> literals is
>>
>> Fractional a => a
>>
>> so the literal parser has no choice but to go via Rational. Once you
>> have that, you use the same parser for those Read instances to ensure
>> that the result is identical to what you would get if you parse it as
>> a literal in every case.
>>
>> You could replace the Read parsers for Float and Double with much more
>> efficient ones. But you would need to provide some other guarantee of
>> consistency with literals. That would be more difficult to achieve
>> than one might think - floating point is deceptively tricky. There are
>> already several good parsers in the libraries, but I believe all of
>> them can provide different results than literals in some cases.
>>
>> Yitz
>>
>> On Sat, Oct 8, 2016 at 10:27 PM, David Feuer wrote:
>> > The current Read instances for Float and Double look pretty iffy
>> > from an efficiency standpoint. Going through Rational is exceedingly
>> > weird: we have absolutely nothing to gain by dividing out the GCD,
>> > as far as I can tell. Then, in doing so, we read the digits of the
>> > integral part to form an Integer. This looks like a detour, and
>> > particularly bad when it has many digits. Wouldn't it be better to
>> > normalize the decimal representation first in some fashion (e.g., to
>> > 0.xxexxx) and go from there? Probably less importantly, is there
>> > some way to avoid converting the mantissa to an Integer at all? The
>> > low digits may not end up making any difference whatsoever.


Re: when building latest GHC on Mac with Xcode 8: Symbol not found: _clock_gettime

2016-10-10 Thread John Leo
Thanks very much, Brandon, for your fast reply! That did the trick. I had
to rerun configure as well; when I didn't, I got a different but
seemingly related error. But after a clean, configure, and make,
everything seems to work again.

John

On Mon, Oct 10, 2016 at 8:27 PM, Brandon Allbery wrote:

>
> On Mon, Oct 10, 2016 at 11:22 PM, John Leo  wrote:
>
>> I'm trying to compile ghc from the latest source and am hitting an error
>> "Symbol not found: _clock_gettime".  I'm on Mac El Capitan and recently
>> installed Xcode 8 which I'm sure is what caused the problem.  Using Google
>> I found some relevant pages including this one
>> https://mail.haskell.org/pipermail/ghc-devs/2016-July/012511.html
>>
>>
>> but I've been unable to figure out what I can do to fix the problem.  Any
>> help would be appreciated.
>>
>
> You need to download the 10.11 Command Line Tools from download.apple.com
> and reinstall them over the Xcode 8 command line tools, which are for 10.12
> and will have problems like this. (Apple intends to correct this in Xcode
> 8.1.) You need a free Mac Developer account for this, or maybe you can find
> the 10.11 tools elsewhere. You will then need to clean and rebuild ghc.
>
> --
> brandon s allbery kf8nh   sine nomine
> associates
> allber...@gmail.com
> ballb...@sinenomine.net
> unix, openafs, kerberos, infrastructure, xmonad
> http://sinenomine.net
>


Re: Reading floating point

2016-10-10 Thread David Feuer
I fully expect this to be somewhat tricky, yes. But some aspects of the
current implementation strike me as pretty clearly non-optimal. What I
meant about going through Rational is that given "625e-5", say, it
calculates 625 % 10^5, producing a fraction in lowest terms, before calling
fromRational, which itself invokes fromRat'', a division function optimized
for a special case that doesn't seem too relevant in this context. I could
be mistaken, but I imagine even reducing to lowest terms is useless here.
The separate treatment of the digits preceding and following the decimal
point doesn't do anything obviously useful either. If we (effectively)
normalize in decimal to an integral mantissa, for example, then we can
convert the whole mantissa to an Integer at once; this will balance the
merge tree better than converting the two pieces separately and combining.
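
For concreteness, here is a minimal sketch of the two shapes being
compared, using only plain base functions (viaRational and normalize are
illustrative names, not GHC internals):

    import Data.Ratio ((%))

    -- The current path for "625e-5": (%) divides out the GCD, reducing
    -- to 1 % 160 before fromRational ever runs - work that buys nothing
    -- for this use.
    viaRational :: Double
    viaRational = fromRational (625 % 10 ^ (5 :: Int))

    -- The proposed shape: glue the digit strings into one integral
    -- mantissa plus a decimal exponent, so all the digits are converted
    -- to Integer in a single, better-balanced pass.
    -- e.g. normalize "12" "345" 2 == (12345, -1)   -- i.e. 12.345e2
    normalize :: String -> String -> Int -> (Integer, Int)
    normalize intPart fracPart e =
      (read (intPart ++ fracPart), e - length fracPart)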

On Oct 10, 2016 6:00 AM, "Yitzchak Gale"  wrote:

The way I understood it, it's because the type of "floating point" literals
is

Fractional a => a

so the literal parser has no choice but to go via Rational. Once you
have that, you use the same parser for those Read instances to ensure
that the result is identical to what you would get if you parse it as
a literal in every case.

You could replace the Read parsers for Float and Double with much more
efficient ones. But you would need to provide some other guarantee of
consistency with literals. That would be more difficult to achieve
than one might think - floating point is deceptively tricky. There are
already several good parsers in the libraries, but I believe all of
them can provide different results than literals in some cases.

Yitz

On Sat, Oct 8, 2016 at 10:27 PM, David Feuer  wrote:
> The current Read instances for Float and Double look pretty iffy from an
> efficiency standpoint. Going through Rational is exceedingly weird: we
> have absolutely nothing to gain by dividing out the GCD, as far as I can
> tell. Then, in doing so, we read the digits of the integral part to form
> an Integer. This looks like a detour, and particularly bad when it has
> many digits. Wouldn't it be better to normalize the decimal
> representation first in some fashion (e.g., to 0.xxexxx) and go from
> there? Probably less importantly, is there some way to avoid converting
> the mantissa to an Integer at all? The low digits may not end up making
> any difference whatsoever.


GHC 8.0.2 status

2016-10-10 Thread Ben Gamari
Hello GHCers,

Thanks to the work of darchon, the last blocker for the 8.0.2 release
(#12479) has nearly been resolved. Once the fix has been merged I'll do
some further testing of the ghc-8.0 branch and cut a source tarball for
8.0.2-rc1 later this week.

If you intend to offer a binary release for 8.0.2, it would be great if
you could plan on testing the tarball promptly, so we can cut 8.0.2 and
move on to planning for 8.2.1.

Thanks for your help and patience!

Cheers,

- Ben




RE: [Diffusion] [Build Failed] rGHCa6111b8cc14a: More tests for Trac #12522

2016-10-10 Thread Simon Peyton Jones via ghc-devs
This says “stat not good enough” for “max_bytes_used” on T1969.  I pushed a 
“T1969 is ok” patch recently, because it IS ok on my (64-bit Linux) machine.

If it’s not ok for our CI infrastructure, by all means un-push it or something.

Simon


From: nore...@phabricator.haskell.org [mailto:nore...@phabricator.haskell.org]
Sent: 10 October 2016 15:07
To: Simon Peyton Jones 
Subject: [Diffusion] [Build Failed] rGHCa6111b8cc14a: More tests for Trac #12522

Harbormaster failed to build B11303: rGHCa6111b8cc14a: More tests for Trac 
#12522!


BRANCHES
master

USERS
simonpj (Author)
O7 (Auditor)

COMMIT
https://phabricator.haskell.org/rGHCa6111b8cc14a



Re: Default options for -threaded

2016-10-10 Thread Eric Seidel
Ah, I'm sorry, I believe I was thinking of -qm, which is supposed to
prevent threads from being moved. I forgot these were separate options!
And the latest version of the User's Guide includes a comment about -qm:

> This option is probably only of use for concurrent programs that explicitly 
> schedule threads onto CPUs with Control.Concurrent.forkOn.

which is exactly what I had to do.
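
For reference, the pattern in question looks roughly like this (a minimal
sketch, not Eric's actual code; run with e.g. +RTS -N4 -qm):

    import Control.Concurrent

    -- Pin one worker to each capability explicitly with forkOn;
    -- -qm then keeps the RTS from migrating them afterwards.
    main :: IO ()
    main = do
      n <- getNumCapabilities
      dones <- mapM spawn [0 .. n - 1]
      mapM_ takeMVar dones
      where
        spawn i = do
          done <- newEmptyMVar
          _ <- forkOn i $ do
            putStrLn ("worker pinned to capability " ++ show i)
            putMVar done ()
          return done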

On Mon, Oct 10, 2016, at 03:34, Phyx wrote:
> Oh, this is surprising, I must admit I haven't tried forkIO, but with
> forkOS it doesn't move the threads across capabilities.
> 
> Do you know if this is by design or a bug?
> 
> On Sat, Oct 8, 2016 at 6:13 PM, Eric Seidel  wrote:
> 
> > I would prefer keeping -N1 as a default, especially now that the number
> > of capabilities can be set at runtime. Programs can then use the more
> > common -j flag to enable parallelism.
> >
> > Regarding -qa, I was experimenting with it over the summer and found its
> > behavior a bit surprising. It did prevent threads from being moved
> > between capabilities, but it also forced all of the threads (created
> > with forkIO) to be *spawned* on the same capability, which was
> > unexpected. So -N -qa was, in my experience, equivalent to -N1!
> >
> > On Sat, Oct 8, 2016, at 09:55, Ben Gamari wrote:
> > > loneti...@gmail.com writes:
> > >
> > > > Hi All,
> > > >
> > > > A user on https://ghc.haskell.org/trac/ghc/ticket/11054 has asked why
> > > > -N -qa isn’t the default for -threaded.
> > > >
> > > I'm not sure that scheduling on all of the cores on the user's machine by
> > > default is a good idea, especially given that our users have
> > > learned to expect the existing default. Enabling affinity by default
> > > seems reasonable if we have evidence that it helps the majority of
> > > applications, but we would first need to introduce an additional
> > > flag to disable it.
> > >
> > > In general I think -N1 is a reasonable default as it acknowledges the
> > > fact that deploying parallelism is not something that can be done
> > > blindly in many (most?) applications. To make effective use of
> > > parallelism the user needs to understand their hardware, their
> > > application, and its interaction with the runtime system and configure
> > > the RTS appropriately.
> > >
> > > Of course, this is just my two-cents.
> > >
> > > Cheers,
> > >
> > > - Ben


Re: Default options for -threaded

2016-10-10 Thread Phyx
Oh, this is surprising, I must admit I haven't tried forkIO, but with
forkOS it doesn't move the threads across capabilities.

Do you know if this is by design or a bug?

On Sat, Oct 8, 2016 at 6:13 PM, Eric Seidel  wrote:

> I would prefer keeping -N1 as a default, especially now that the number
> of capabilities can be set at runtime. Programs can then use the more
> common -j flag to enable parallelism.
>
> Regarding -qa, I was experimenting with it over the summer and found its
> behavior a bit surprising. It did prevent threads from being moved
> between capabilities, but it also forced all of the threads (created
> with forkIO) to be *spawned* on the same capability, which was
> unexpected. So -N -qa was, in my experience, equivalent to -N1!
>
> On Sat, Oct 8, 2016, at 09:55, Ben Gamari wrote:
> > loneti...@gmail.com writes:
> >
> > > Hi All,
> > >
> > > A user on https://ghc.haskell.org/trac/ghc/ticket/11054 has asked why
> > > -N -qa isn’t the default for -threaded.
> > >
> > I'm not sure that scheduling on all of the cores on the user's machine by
> > default is a good idea, especially given that our users have
> > learned to expect the existing default. Enabling affinity by default
> > seems reasonable if we have evidence that it helps the majority of
> > applications, but we would first need to introduce an additional
> > flag to disable it.
> >
> > In general I think -N1 is a reasonable default as it acknowledges the
> > fact that deploying parallelism is not something that can be done
> > blindly in many (most?) applications. To make effective use of
> > parallelism the user needs to understand their hardware, their
> > application, and its interaction with the runtime system and configure
> > the RTS appropriately.
> >
> > Of course, this is just my two-cents.
> >
> > Cheers,
> >
> > - Ben


Re: Default options for -threaded

2016-10-10 Thread Phyx
Oops, sorry, only just now seen this. It seems my overly aggressive filters
couldn't decide where to put the email :)

I do agree to some extent with this. If I make a mistake, I'd prefer my
system not to hang. The one downside to this default, though, is that you
can't just hand a program over to a user and have it run on all
capabilities.

Is it possible to set this from inside a program? My guess is no, since by
the time you get to main the RTS is already initialized?
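
(As it happens, the RTS does allow this after startup: this is the
"set at runtime" facility Eric mentions below. A minimal sketch, assuming
base's setNumCapabilities, available since roughly GHC 7.4:)

    import Control.Concurrent (setNumCapabilities)
    import GHC.Conc (getNumProcessors)

    -- Raise the capability count to the machine's processor count at
    -- the start of main, after RTS initialization.
    main :: IO ()
    main = do
      n <- getNumProcessors
      setNumCapabilities n
      putStrLn ("running with " ++ show n ++ " capabilities")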

Would a useful alternative be to provide a compile-time flag that changes
the default, i.e. opt-in? As it stands, there is a small burden on the end
user.

Cheers,
Tamar

On Sat, Oct 8, 2016 at 5:55 PM, Ben Gamari  wrote:

> loneti...@gmail.com writes:
>
> > Hi All,
> >
> > A user on https://ghc.haskell.org/trac/ghc/ticket/11054 has asked why
> > -N -qa isn’t the default for -threaded.
> >
> I'm not sure that scheduling on all of the cores on the user's machine by
> default is a good idea, especially given that our users have
> learned to expect the existing default. Enabling affinity by default
> seems reasonable if we have evidence that it helps the majority of
> applications, but we would first need to introduce an additional
> flag to disable it.
>
> In general I think -N1 is a reasonable default as it acknowledges the
> fact that deploying parallelism is not something that can be done
> blindly in many (most?) applications. To make effective use of
> parallelism the user needs to understand their hardware, their
> application, and its interaction with the runtime system and configure
> the RTS appropriately.
>
> Of course, this is just my two-cents.
>
> Cheers,
>
> - Ben
>


Re: Reading floating point

2016-10-10 Thread Yitzchak Gale
The way I understood it, it's because the type of "floating point" literals is

Fractional a => a

so the literal parser has no choice but to go via Rational. Once you
have that, you use the same parser for those Read instances to ensure
that the result is identical to what you would get if you parse it as
a literal in every case.
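
Concretely, the standard elaboration of a fractional literal looks like
this (the binding x is just illustration):

    import Data.Ratio ((%))

    -- A literal like 1.25 at type Double elaborates to fromRational
    -- applied to an exact Rational:
    x :: Double
    x = fromRational (5 % 4)   -- what writing the literal 1.25 means

so whatever parses the digits has to produce that Rational first.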

You could replace the Read parsers for Float and Double with much more
efficient ones. But you would need to provide some other guarantee of
consistency with literals. That would be more difficult to achieve
than one might think - floating point is deceptively tricky. There are
already several good parsers in the libraries, but I believe all of
them can provide different results than literals in some cases.

Yitz

On Sat, Oct 8, 2016 at 10:27 PM, David Feuer  wrote:
> The current Read instances for Float and Double look pretty iffy from an
> efficiency standpoint. Going through Rational is exceedingly weird: we
> have absolutely nothing to gain by dividing out the GCD, as far as I can
> tell. Then, in doing so, we read the digits of the integral part to form
> an Integer. This looks like a detour, and particularly bad when it has
> many digits. Wouldn't it be better to normalize the decimal
> representation first in some fashion (e.g., to 0.xxexxx) and go from
> there? Probably less importantly, is there some way to avoid converting
> the mantissa to an Integer at all? The low digits may not end up making
> any difference whatsoever.


Re: Allow top-level shadowing for imported names?

2016-10-10 Thread Yitzchak Gale
Michael Sloan wrote:
> It is really good to think in terms of a cleverness budget...
> Here are the things I see in favor of this proposal:
>
> 1) It is common practice to use -Wall...
> 2) It lets us do things that are otherwise quite inconvenient...

You missed the most important plus:

0) It fixes an inconsistency and thus simplifies Haskell syntax.

So in my opinion this is not a cleverness proposal; it's a
simplification.

> 2) There is no good way to use this feature without creating a
> warning.

I'm not sure what you mean. There is already a warning for
shadowing - except for shadowing from imports, where it is an
error instead. The proposal is to eliminate that special case.
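
To make the special case concrete (a sketch - this module is accepted
today, but the use site is rejected as ambiguous rather than warned
about as shadowing):

    import Data.List (insert, sort)

    -- A top-level binding with the same name as an imported one:
    sort :: Ord a => [a] -> [a]
    sort = foldr insert []

    main :: IO ()
    main = print (sort [3, 1, 2])
      -- today: error, "Ambiguous occurrence 'sort'"; under the
      -- proposal the local definition would shadow Data.List.sort,
      -- with at most a warning.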

>  I would like to be explicit in my name shadowing ... I'm
> thinking a pragma like {-# NO_WARN myFunction #-},
> or, better yet, the more specific
> {-# SHADOWING myFunction #-} or so.

The same applies to shadowing in every other context.
Adding such a pragma might indeed be a nice idea. But
it should apply consistently to shadowing in all contexts,
not just for import shadowing. In any case, this
would be a separate proposal.
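
For reference, the warning that already exists in the other contexts
looks roughly like this (enabled by -Wname-shadowing, which -Wall
implies; message text approximate):

    f :: Int -> Int
    f x = let x = 1 in x
    -- warning: [-Wname-shadowing]
    --     This binding for 'x' shadows the existing binding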

-Yitz