Re: Trouble committing

2014-12-13 Thread Erik de Castro Lopo
Andreas Voellmy wrote:

> Hi GHCers,
> 
> I just fixed a bug (#9423) and went through the Phab workflow. Then I did a
> fresh checkout from git and ran:
> 
> $ git checkout master
> $ arc patch --nobranch D129
> $ git push origin master
> 
> as explained on https://ghc.haskell.org/trac/ghc/wiki/Phabricator, but on
> the last command I get this error:
> 
> fatal: remote error: access denied or repository not exported: /ghc.git
> 
> Maybe I just no longer have commit access to ghc?

Andi,

Did you get a response to this? I seem to be in the same boat for D570.

Cheers,
Erik
-- 
--
Erik de Castro Lopo
http://www.mega-nerd.com/
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: ANNOUNCE: GHC 7.8.4 Release Candidate 1

2014-12-13 Thread Mikolaj Konarski
OK. In that case, let's remember to get *that* version of cabal into 7.8.4.


/me conditions himself with chocolate to help the remembering

On Sat, Dec 13, 2014 at 11:02 PM, Thomas Tuegel  wrote:
> On Sat, Dec 13, 2014 at 3:56 PM, Mikolaj Konarski
>  wrote:
>> On Sat, Dec 13, 2014 at 8:53 PM, Carter Schonwald
>>  wrote:
>>> Thomas and I have found some bugs in HPC on OSX, and we're in the midst of
>>> tracking those down,
>>
>> You mean these are regressions? If they are introduced
>> in one of the non-blocker fixes in 7.8.4, we can probably
>> just revert them. Anyway, thanks a lot for testing.
>
> Sorry, these are not regressions. It's really a bug in Cabal which
> will be fixed in 1.22. When Carter wrote this, we thought the problem
> was with the GHC side of HPC because of some very vague error
> messages.
>
> --
> Thomas Tuegel
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: ANNOUNCE: GHC 7.8.4 Release Candidate 1

2014-12-13 Thread Thomas Tuegel
On Sat, Dec 13, 2014 at 3:56 PM, Mikolaj Konarski
 wrote:
> On Sat, Dec 13, 2014 at 8:53 PM, Carter Schonwald
>  wrote:
>> Thomas and I have found some bugs in HPC on OSX, and we're in the midst of
>> tracking those down,
>
> You mean these are regressions? If they are introduced
> in one of the non-blocker fixes in 7.8.4, we can probably
> just revert them. Anyway, thanks a lot for testing.

Sorry, these are not regressions. It's really a bug in Cabal which
will be fixed in 1.22. When Carter wrote this, we thought the problem
was with the GHC side of HPC because of some very vague error
messages.

-- 
Thomas Tuegel
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: ANNOUNCE: GHC 7.8.4 Release Candidate 1

2014-12-13 Thread Mikolaj Konarski
On Sat, Dec 13, 2014 at 8:53 PM, Carter Schonwald
 wrote:
> Thomas and I have found some bugs in HPC on OSX, and we're in the midst of
> tracking those down,

You mean these are regressions? If they are introduced
in one of the non-blocker fixes in 7.8.4, we can probably
just revert them. Anyway, thanks a lot for testing.

Cheers,
Mikolaj
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Program runs out of memory using GHC 7.6.3

2014-12-13 Thread David Spies
I tried adding strictness to everything, forcing each line with "evaluate .
force"

It still runs out of memory and now running with -hc blames the extra
memory on "trace elements" which seems somewhat unhelpful.
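
For anyone following along, forcing each line as it comes in usually looks something like the following minimal sketch (assuming the lines are read as strict ByteStrings, which have an NFData instance; this is an editor's illustration, not David's actual code):

import Control.DeepSeq (force)
import Control.Exception (evaluate)
import qualified Data.ByteString.Char8 as BS

-- Read one line and force it to normal form immediately, so no thunks
-- holding on to the input pile up between reads.
getLineStrict :: IO BS.ByteString
getLineStrict = BS.getLine >>= evaluate . force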


On Sat, Dec 13, 2014 at 2:10 PM, David Spies  wrote:
>
> I think there's some confusion about makeCounts's behavior.  makeCount
> never traverses the same thing twice.  Essentially, the worst-case size of
> the unevaluated thunks doesn't exceed the total size of the array of lists
> that was used to create them (and that array itself was created with
> accumArray which is strict).
> Nonetheless, I've tried adding strictness all over makeCounts and it
> reduces the memory usage a little bit, but it still fails a later input
> instance with OOM.  It's not a significant reduction like in GHC 7.8.3
>
>
> On Sat, Dec 13, 2014 at 3:06 AM, Matthias Fischmann 
> wrote:
>>
>>
>> Hi David,
>>
>> I don't think this is a ghc issue.
>>
>> I suspect you have too many unevaluated function calls lying around
>> (this would cause the runtime to run out of *stack* as opposed to
>> *heap*).  Different versions of ghc perform different optimizations on
>> your code, and 7.8 knows a way to fix it that 7.6 doesn't know.
>>
>> This is usually solved by adding strictness: Instead of letting the
>> unevaluated function calls pile up, you force them (e.g. with `print`
>> or `Control.DeepSeq.deepseq`).
>>
>> I would take a closer look at your makeCounts function: you
>> traverse the input list, and traverse the entire list (starting from
>> each element) again in each round.  Either you should find a way to
>> iterate only once and accumulate all the data you need, or you should
>> start optimizing there.
>>
>> hope this helps,
>> cheers,
>> matthias
>>
>>
>> On Sat, Dec 13, 2014 at 02:06:52AM -0700, David Spies wrote:
>> > Date: Sat, 13 Dec 2014 02:06:52 -0700
>> > From: David Spies 
>> > To: "ghc-devs@haskell.org" 
>> > Subject: Program runs out of memory using GHC 7.6.3
>> >
>> > I have a program I submitted for a Kattis problem:
>> > https://open.kattis.com/problems/digicomp2
>> > But I got memory limit exceeded.  I downloaded the test data and ran the
>> > program on my own computer without problems.  Eventually I found out
>> that
>> > when compiling with GHC 7.6.3 (the version Kattis uses) rather than
>> 7.8.3,
>> > this program runs out of memory.
>> > Can someone explain why it only works on the later compiler?  Is there a
>> > workaround so that I can submit to Kattis?
>> >
>> > Thanks,
>> > David
>>
>> > module Main(main) where
>> >
>> > import   Control.Monad
>> > import   Data.Array
>> > import qualified Data.ByteString.Char8 as BS
>> > import   Data.Int
>> > import   Data.Maybe
>> >
>> > readAsInt :: BS.ByteString -> Int
>> > readAsInt = fst . fromJust . BS.readInt
>> >
>> > readVert :: IO Vert
>> > readVert = do
>> >   [s, sl, sr] <- liftM BS.words BS.getLine
>> >   return $ V (fromBS s) (readAsInt sl) (readAsInt sr)
>> >
>> > main::IO()
>> > main = do
>> >   [n, m64] <- liftM (map read . words) getLine :: IO [Int64]
>> >   let m = fromIntegral m64 :: Int
>> >   verts <- replicateM m readVert
>> >   let vside = map getSide verts
>> >   let vpar = concat $ zipWith makeAssoc [1..] verts
>> >   let parArr = accumArray (flip (:)) [] (1, m) vpar
>> >   let counts = makeCounts n m $ elems parArr
>> >   let res = zipWith doFlips counts vside
>> >   putStrLn $ map toChar res
>> >
>> > doFlips :: Int64 -> Side -> Side
>> > doFlips n
>> >   | odd n = flipSide
>> >   | otherwise = id
>> >
>> > makeCounts :: Int64 -> Int -> [[(Int, Round)]] -> [Int64]
>> > makeCounts n m l = tail $ elems res
>> >   where
>> >     res = listArray (0, m) $ 0 : n : map makeCount (tail l)
>> >     makeCount :: [(Int, Round)] -> Int64
>> >     makeCount = sum . map countFor
>> >     countFor :: (Int, Round) -> Int64
>> >     countFor (i, Up) = ((res ! i) + 1) `quot` 2
>> >     countFor (i, Down) = (res ! i) `quot` 2
>> >
>> > fromBS :: BS.ByteString -> Side
>> > fromBS = fromChar . BS.head
>> >
>> > fromChar :: Char -> Side
>> > fromChar 'L' = L
>> > fromChar 'R' = R
>> > fromChar _ = error "Bad char"
>> >
>> > toChar :: Side -> Char
>> > toChar L = 'L'
>> > toChar R = 'R'
>> >
>> > makeAssoc :: Int -> Vert -> [(Int, (Int, Round))]
>> > makeAssoc n (V L a b) = filtPos [(a, (n, Up)), (b, (n, Down))]
>> > makeAssoc n (V R a b) = filtPos [(a, (n, Down)), (b, (n, Up))]
>> >
>> > filtPos :: [(Int, a)] -> [(Int, a)]
>> > filtPos = filter ((> 0) . fst)
>> >
>> > data Vert = V !Side !Int !Int
>> >
>> > getSide :: Vert -> Side
>> > getSide (V s _ _) = s
>> >
>> > data Side = L | R
>> >
>> > data Round = Up | Down
>> >
>> > flipSide :: Side -> Side
>> > flipSide L = R
>> > flipSide R = L
>>
>>
>> > ___
>> > ghc-devs mailing list
>> > ghc-devs@haskell.org
>> > http://www.haskell.org/mailman/listinfo/ghc-devs
>>
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs

Re: Program runs out of memory using GHC 7.6.3

2014-12-13 Thread David Spies
I think there's some confusion about makeCounts's behavior.  makeCount
never traverses the same thing twice.  Essentially, the worst-case size of
the unevaluated thunks doesn't exceed the total size of the array of lists
that was used to create them (and that array itself was created with
accumArray which is strict).
Nonetheless, I've tried adding strictness all over makeCounts and it
reduces the memory usage a little bit, but it still fails a later input
instance with OOM.  It's not a significant reduction like in GHC 7.8.3
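
As a side note for readers, the shape makeCounts relies on is the usual self-referential lazy array, shown in the small sketch below (an editor's illustration, not code from this thread): each cell is a thunk that may read earlier cells of the same array, and those thunks stay unevaluated until something demands them, which is where differences between compiler versions can show up in the heap.

import Data.Array

-- A self-referential lazy array (for n >= 1): cell i is a thunk that reads
-- cells i-1 and i-2 of the same array.
fibs :: Int -> Array Int Integer
fibs n = res
  where
    res = listArray (0, n) (0 : 1 : [res ! (i - 1) + res ! (i - 2) | i <- [2 .. n]])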


On Sat, Dec 13, 2014 at 3:06 AM, Matthias Fischmann  wrote:
>
>
> Hi David,
>
> I don't think this is a ghc issue.
>
> I suspect you have too many unevaluated function calls lying around
> (this would cause the runtime to run out of *stack* as opposed to
> *heap*).  Different versions of ghc perform different optimizations on
> your code, and 7.8 knows a way to fix it that 7.6 doesn't know.
>
> This is usually solved by adding strictness: Instead of letting the
> unevaluated function calls pile up, you force them (e.g. with `print`
> or `Control.DeepSeq.deepseq`).
>
> I would take a closer look at your makeCounts function: you
> traverse the input list, and traverse the entire list (starting from
> each element) again in each round.  Either you should find a way to
> iterate only once and accumulate all the data you need, or you should
> start optimizing there.
>
> hope this helps,
> cheers,
> matthias
>
>
> On Sat, Dec 13, 2014 at 02:06:52AM -0700, David Spies wrote:
> > Date: Sat, 13 Dec 2014 02:06:52 -0700
> > From: David Spies 
> > To: "ghc-devs@haskell.org" 
> > Subject: Program runs out of memory using GHC 7.6.3
> >
> > I have a program I submitted for a Kattis problem:
> > https://open.kattis.com/problems/digicomp2
> > But I got memory limit exceeded.  I downloaded the test data and ran the
> > program on my own computer without problems.  Eventually I found out that
> > when compiling with GHC 7.6.3 (the version Kattis uses) rather than
> 7.8.3,
> > this program runs out of memory.
> > Can someone explain why it only works on the later compiler?  Is there a
> > workaround so that I can submit to Kattis?
> >
> > Thanks,
> > David
>
> > module Main(main) where
> >
> > import   Control.Monad
> > import   Data.Array
> > import qualified Data.ByteString.Char8 as BS
> > import   Data.Int
> > import   Data.Maybe
> >
> > readAsInt :: BS.ByteString -> Int
> > readAsInt = fst . fromJust . BS.readInt
> >
> > readVert :: IO Vert
> > readVert = do
> >   [s, sl, sr] <- liftM BS.words BS.getLine
> >   return $ V (fromBS s) (readAsInt sl) (readAsInt sr)
> >
> > main::IO()
> > main = do
> >   [n, m64] <- liftM (map read . words) getLine :: IO [Int64]
> >   let m = fromIntegral m64 :: Int
> >   verts <- replicateM m readVert
> >   let vside = map getSide verts
> >   let vpar = concat $ zipWith makeAssoc [1..] verts
> >   let parArr = accumArray (flip (:)) [] (1, m) vpar
> >   let counts = makeCounts n m $ elems parArr
> >   let res = zipWith doFlips counts vside
> >   putStrLn $ map toChar res
> >
> > doFlips :: Int64 -> Side -> Side
> > doFlips n
> >   | odd n = flipSide
> >   | otherwise = id
> >
> > makeCounts :: Int64 -> Int -> [[(Int, Round)]] -> [Int64]
> > makeCounts n m l = tail $ elems res
> >   where
> >     res = listArray (0, m) $ 0 : n : map makeCount (tail l)
> >     makeCount :: [(Int, Round)] -> Int64
> >     makeCount = sum . map countFor
> >     countFor :: (Int, Round) -> Int64
> >     countFor (i, Up) = ((res ! i) + 1) `quot` 2
> >     countFor (i, Down) = (res ! i) `quot` 2
> >
> > fromBS :: BS.ByteString -> Side
> > fromBS = fromChar . BS.head
> >
> > fromChar :: Char -> Side
> > fromChar 'L' = L
> > fromChar 'R' = R
> > fromChar _ = error "Bad char"
> >
> > toChar :: Side -> Char
> > toChar L = 'L'
> > toChar R = 'R'
> >
> > makeAssoc :: Int -> Vert -> [(Int, (Int, Round))]
> > makeAssoc n (V L a b) = filtPos [(a, (n, Up)), (b, (n, Down))]
> > makeAssoc n (V R a b) = filtPos [(a, (n, Down)), (b, (n, Up))]
> >
> > filtPos :: [(Int, a)] -> [(Int, a)]
> > filtPos = filter ((> 0) . fst)
> >
> > data Vert = V !Side !Int !Int
> >
> > getSide :: Vert -> Side
> > getSide (V s _ _) = s
> >
> > data Side = L | R
> >
> > data Round = Up | Down
> >
> > flipSide :: Side -> Side
> > flipSide L = R
> > flipSide R = L
>
>
> > ___
> > ghc-devs mailing list
> > ghc-devs@haskell.org
> > http://www.haskell.org/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Program runs out of memory using GHC 7.6.3

2014-12-13 Thread David Spies
I tried all optimization levels of 7.6.3 and it runs out of memory.
I tried all optimization levels of 7.8.3 and it doesn't.

So it must be something the compiler does even without any optimization.

On Sat, Dec 13, 2014 at 3:05 AM, Mikolaj Konarski 
wrote:
>
> It may be that GHC 7.8 optimizes the program better.
> Compile with -O0 and see if it runs out of memory, too.
> If so, you can just optimize the program by hand.
> I'd suggest making a heap profile with -O0 or in GHC 7.6
> and finding out where the memory goes.
>
> Of course, it's possible you've hit a compiler bug,
> but it makes sense not to start with that assumption.
>
> Have fun,
> Mikolaj
>
> On Sat, Dec 13, 2014 at 10:06 AM, David Spies  wrote:
> > I have a program I submitted for a Kattis problem:
> > https://open.kattis.com/problems/digicomp2
> > But I got memory limit exceeded.  I downloaded the test data and ran the
> > program on my own computer without problems.  Eventually I found out that
> > when compiling with GHC 7.6.3 (the version Kattis uses) rather than
> 7.8.3,
> > this program runs out of memory.
> > Can someone explain why it only works on the later compiler?  Is there a
> > workaround so that I can submit to Kattis?
> >
> > Thanks,
> > David
> >
> >
> > ___
> > ghc-devs mailing list
> > ghc-devs@haskell.org
> > http://www.haskell.org/mailman/listinfo/ghc-devs
> >
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://www.haskell.org/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Please test: release candidates for Cabal/cabal-install patch releases on the 1.18 and 1.20 branches

2014-12-13 Thread Ben Gamari
Ben Gamari  writes:

> Johan Tibell  writes:
>
>> Ben,
>>
>> Is this something that worked in cabal-install 1.18.0.5 and that stopped
>> working in 1.18.0.6 or is it something that didn't work in 1.18.0.5 but you
>> expected to be fixed in 1.18.0.6? These 1.18 and 1.20 releases just target
>> a very few critical bugs. They are not attempts to backport all bugfixes
>> from master.
>>
> Fair enough; ignore the first issue in that case. Nevertheless, given
> that the network-2.6 fix made it in I think it would be worth
> cherry-picking the network-uri fix as well so that the release can be
> properly tested.
>
> Things look pretty good to me otherwise. I'll test 1.20 next.
>
1.20 looks good to me.

Cheers,

- Ben



___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: ANNOUNCE: GHC 7.8.4 Release Candidate 1

2014-12-13 Thread Carter Schonwald
Thomas and I have found some bugs in HPC on OSX, and we're in the midst of
tracking those down,

Those fixes should get into 7.8.4 and 7.10 both.  Currently HPC on OSX is
broken in pretty fundamental ways, and that's not ok!

-Carter


On Wed, Dec 3, 2014 at 5:53 PM, George Colpitts 
wrote:
>
> Would it be possible to get a RC for the Mac up at
> https://downloads.haskell.org/~ghc/7.8.4-rc1/ ?
>
> Thanks
> George
>
>
> On Wed, Nov 26, 2014 at 10:31 AM, Herbert Valerio Riedel 
> wrote:
>
>> On 2014-11-26 at 12:40:37 +0100, Sven Panne wrote:
>> > 2014-11-25 20:46 GMT+01:00 Austin Seipp :
>> >> We are pleased to announce the first release candidate for GHC 7.8.4:
>> >>
>> >> https://downloads.haskell.org/~ghc/7.8.4-rc1/ [...]
>> >
>> > Would it be possible to get the RC on
>> > https://launchpad.net/~hvr/+archive/ubuntu/ghc? This way one could
>> > easily test things on Travis CI.
>>
>> I'll put a 7.8.4rc .deb up soon (probably right after the GHC 7.10
>> branch has been created)
>> ___
>> Glasgow-haskell-users mailing list
>> glasgow-haskell-us...@haskell.org
>> http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
>>
>
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://www.haskell.org/mailman/listinfo/ghc-devs
>
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: performance regressions

2014-12-13 Thread Richard Eisenberg
Fixed, hopefully!

On Dec 13, 2014, at 10:03 AM, Richard Eisenberg  wrote:

> I think I've fixed this. I've pushed the fix to wip/rae, and am waiting for 
> validation results before pushing to master.
> 
> My hunch below was right -- it was the change to matchFam, which essentially 
> evaluated type-level functions more strictly. I've now made it lazier again. 
> I'd like to better understand the tradeoff here, and to see if there's a 
> principled sweet spot. But that will happen in a few days.
> 
> Expect a push to master soon.
> 
> Again, sorry for the bother.
> 
> Richard
> 
> On Dec 13, 2014, at 8:32 AM, Joachim Breitner  
> wrote:
> 
>> Hi,
>> 
>> 
>> Am Freitag, den 12.12.2014, 21:51 -0500 schrieb Richard Eisenberg:
>>> 
>>> Phab has shown up some performance regressions in my recent commits.
>>> See https://phabricator.haskell.org/harbormaster/build/2607/. The
>>> failures except for haddock.base are new, and evidently my fault. They
>>> didn't show up on Travis. Will look into it shortly, but I doubt over
>>> the weekend.
>> 
>> 
>> ghcspeed also observes this:
>> http://ghcspeed-nomeata.rhcloud.com/changes/?rev=7256213843b80d75a86f033be77516a62d56044a&exe=2&env=johan%27s%20buildbot
>> 
>> Especially the T9872 benchmarks have a huge increase in allocations. But
>> you seem to be aware of this, so that’s fine.
>> 
>> Greetings,
>> Joachim
>> 
>> -- 
>> Joachim “nomeata” Breitner
>> m...@joachim-breitner.de • http://www.joachim-breitner.de/
>> Jabber: nome...@joachim-breitner.de  • GPG-Key: 0xF0FBF51F
>> Debian Developer: nome...@debian.org
>> 
>> ___
>> ghc-devs mailing list
>> ghc-devs@haskell.org
>> http://www.haskell.org/mailman/listinfo/ghc-devs
> 
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://www.haskell.org/mailman/listinfo/ghc-devs
> 

___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: more parser conflicts?

2014-12-13 Thread Sergei Trofimovich
On Wed, 03 Dec 2014 11:59:42 +
Simon Marlow  wrote:

> >> In unrelated work, I saw this scroll across when happy'ing the parser:
> >>
> >>> shift/reduce conflicts:  60
> >>> reduce/reduce conflicts: 16
> >>
> >> These numbers seem quite a bit higher than what I last remember (which
> >> is something like 48 and 1, not 60 and 16). Does anyone know why?

4 of the reduce/reduce conflicts are the result of an exact rule copy:
https://phabricator.haskell.org/D569

> reduce/reduce conflicts are bad, especially so since they're 
> undocumented.  We don't know whether this introduced parser bugs or not. 
>   Mike - could you look at this please?  It was your commit that 
> introduced the new conflicts.

Agreed.

11 more reduce/reduce conflicts (of the 12 that are left) come from a single scary rule
added in
> commit bc2289e13d9586be087bd8136943dc35a0130c88
>ghc generates more user-friendly error messages

exp10 :: { LHsExpr RdrName }
...
        | 'let' binds                    {% parseErrorSDoc (combineLocs $1 $2) $ text
                                              "parse error in let binding: missing required 'in'"
                                         }

The other rules add shift/reduce conflicts as follows:

-- parsing error messages go below here
{- s/r:1 r/r:0 -}
        | '\\' apat apats opt_asig '->'  {% parseErrorSDoc (combineLocs $1 $5) $ text
                                              "parse error in lambda: no expression after '->'"
                                         }
{- s/r:1 r/r:0 -}
        | '\\'                           {% parseErrorSDoc (getLoc $1) $ text
                                              "parse error: naked lambda expression '\'"
                                         }
{- s/r:1 r/r:0 -}
        | 'let' binds 'in'               {% parseErrorSDoc (combineLocs $1 $2) $ text
                                              "parse error in let binding: missing expression after 'in'"
                                         }
{- s/r:0 r/r:11 -}
        | 'let' binds                    {% parseErrorSDoc (combineLocs $1 $2) $ text
                                              "parse error in let binding: missing required 'in'"
                                         }
{- s/r:0 r/r:0 -}
        | 'let'                          {% parseErrorSDoc (getLoc $1) $ text
                                              "parse error: naked let binding"
                                         }
{- s/r:1 r/r:0 -}
        | 'if' exp optSemi 'then' exp optSemi 'else'
                                         {% hintIf (combineLocs $1 $5) "else clause empty" }
{- s/r:2 r/r:0 -}
        | 'if' exp optSemi 'then' exp optSemi
                                         {% hintIf (combineLocs $1 $5) "missing required else clause" }
{- s/r:1 r/r:0 -}
        | 'if' exp optSemi 'then'        {% hintIf (combineLocs $1 $2) "then clause empty" }
{- s/r:2 r/r:0 -}
        | 'if' exp optSemi               {% hintIf (combineLocs $1 $2) "missing required then and else clauses" }
{- s/r:2 r/r:0 -}
        | 'if'                           {% hintIf (getLoc $1) "naked if statement" }
{- s/r:0 r/r:0 -}
        | 'case' exp 'of'                {% parseErrorSDoc (combineLocs $1 $2) $ text
                                              "parse error in case statement: missing list after '->'"
                                         }
{- s/r:1 r/r:0 -}
        | 'case' exp                     {% parseErrorSDoc (combineLocs $1 $2) $ text
                                              "parse error in case statement: missing required 'of'"
                                         }
{- s/r:1 r/r:0 -}
        | 'case'                         {% parseErrorSDoc (getLoc $1) $ text
                                              "parse error: naked case statement"
                                         }

The shift/reduce conflicts look harmless (like the MultiWayIf ambiguity),
as they seem to resolve correctly as shifts.

-- 

  Sergei


___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: performance regressions

2014-12-13 Thread Richard Eisenberg
I think I've fixed this. I've pushed the fix to wip/rae, and am waiting for 
validation results before pushing to master.

My hunch below was right -- it was the change to matchFam, which essentially 
evaluated type-level functions more strictly. I've now made it lazier again. 
I'd like to better understand the tradeoff here, and to see if there's a 
principled sweet spot. But that will happen in a few days.

Expect a push to master soon.

Again, sorry for the bother.

Richard

On Dec 13, 2014, at 8:32 AM, Joachim Breitner  wrote:

> Hi,
> 
> 
> Am Freitag, den 12.12.2014, 21:51 -0500 schrieb Richard Eisenberg:
>> 
>> Phab has shown up some performance regressions in my recent commits.
>> See https://phabricator.haskell.org/harbormaster/build/2607/. The
>> failures except for haddock.base are new, and evidently my fault. They
>> didn't show up on Travis. Will look into it shortly, but I doubt over
>> the weekend.
> 
> 
> ghcspeed also observes this:
> http://ghcspeed-nomeata.rhcloud.com/changes/?rev=7256213843b80d75a86f033be77516a62d56044a&exe=2&env=johan%27s%20buildbot
> 
> Especially the T9872 benchmarks have a huge increase in allocations. But
> you seem to be aware of this, so that’s fine.
> 
> Greetings,
> Joachim
> 
> -- 
> Joachim “nomeata” Breitner
>  m...@joachim-breitner.de • http://www.joachim-breitner.de/
>  Jabber: nome...@joachim-breitner.de  • GPG-Key: 0xF0FBF51F
>  Debian Developer: nome...@debian.org
> 
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://www.haskell.org/mailman/listinfo/ghc-devs

___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: performance regressions

2014-12-13 Thread Joachim Breitner
Hi,


Am Freitag, den 12.12.2014, 21:51 -0500 schrieb Richard Eisenberg:
> 
> Phab has shown up some performance regressions in my recent commits.
> See https://phabricator.haskell.org/harbormaster/build/2607/. The
> failures except for haddock.base are new, and evidently my fault. They
> didn't show up on Travis. Will look into it shortly, but I doubt over
> the weekend.


ghcspeed also observes this:
http://ghcspeed-nomeata.rhcloud.com/changes/?rev=7256213843b80d75a86f033be77516a62d56044a&exe=2&env=johan%27s%20buildbot

Especially the T9872 benchmarks have a huge increase in allocations. But
you seem to be aware of this, so that’s fine.

Greetings,
Joachim

-- 
Joachim “nomeata” Breitner
  m...@joachim-breitner.de • http://www.joachim-breitner.de/
  Jabber: nome...@joachim-breitner.de  • GPG-Key: 0xF0FBF51F
  Debian Developer: nome...@debian.org



___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Fwd: Garbage collection

2014-12-13 Thread Facundo Domínguez
> So technically, your example might need to involve using g (and forceful GC 
> at a certain point during execution)

Good observation.

> Maybe a stupid question, sorry: The RemoteTable generated using
> template-haskell in CH without XStaticPointers would keep CAFs alive. So the
> XStaticPointers extension does not entail using such a table?

That's correct. The extension is a substitute for the remote table. In
addition, it has the compiler do what remote tables demanded from the
user:
 * adding functions to the remote table before they are looked up,
 * collecting the table pieces from the various modules into a global table.

> Another question: Would it be sufficient to desugar "static g" to
> g `seq` StaticPtr(StaticName "" "Main" "g")
> instead of introducing a stable ptr and all that?

This keeps g alive only while the expression is not evaluated to HNF.
The solution I proposed is flawed as well, since it relies on the
desugared static form being evaluated to HNF for the CAF to be
referenced with a StablePtr.
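
To make the comparison concrete, here is a toy sketch of the two desugarings being discussed (an editor's illustration; StaticPtr and StaticName are stand-ins mirroring the thread, not the real API):

import Foreign.StablePtr (newStablePtr)
import System.IO.Unsafe (unsafePerformIO)

data StaticName = StaticName String String String
data StaticPtr a = StaticPtr StaticName

g :: String
g = "hello"

-- seq-based desugaring (Jost's question): the StaticPtr value only mentions g
-- inside an unevaluated thunk, so forcing it to HNF drops the reference to g.
seqDesugar :: StaticPtr String
seqDesugar = g `seq` StaticPtr (StaticName "" "Main" "g")

-- StablePtr-based desugaring (Facundo's proposal): newStablePtr roots g for
-- the rest of the run, but, as noted above, only once this expression itself
-- has been forced to HNF.
stablePtrDesugar :: StaticPtr String
stablePtrDesugar = unsafePerformIO (newStablePtr g >> return (StaticPtr (StaticName "" "Main" "g")))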

Anyway, after this much time we figured out how to implement the
static pointer table.

> Finally, there is a flag keepCAFs in the runtime which you can set to secure
> the CAFs for the entire run. The parallel runtimes for Eden and GUM (as well
> as my "packman" serialisation) do this.

Good to know about that.

Thank you,
Facundo



On Tue, Nov 18, 2014 at 1:20 PM, Jost Berthold
 wrote:
> Hi Facundo,
>
> You are completely right, the CAF named "g" might be accessed at any time
> during the program execution. Parallel Haskell systems with distributed heap
> (and runtime-supported serialisation) need to keep all CAFs alive for this
> reason.
>
> Some comments inline along your mail:
>
>>   While working in the StaticPointers language extension [1], we
>> found we have some unusual CAFs which can be accessed after some
>> periods of time where there is no reference to them.
>>
>>   For instance, the following program when compiled contains no
>> reference to `g`. `g` is actually looked up at runtime in symbol
>> tables via the call to `deRefStaticPtr`.
>> g :: String
>> g = "hello"
>>
>> main =
>>   deRefStaticPtr (static g) >>= putStrLn
>
>
> The bad scenario is certainly one where CAF g (a static thunk) is evaluated
> during execution (i.e. turned into an indirection into the heap), and then
> garbage-collected, as it might not be referenced by any (runnable) thread.
> This GC does not revert the indirection into a thunk. Why should it, there
> are no references to it, right? ;-)
>
> So technically, your example might need to involve using g (and forceful GC
> at a certain point during execution):
>
> main = putStrLn g >> performGC >>
>        deRefStaticPtr (static g) >>= putStrLn
>
>>
>> Desugars to:
>>
>> g :: String
>> g = "hello"
>>
>> main =
>
> putStrLn g >> performGC >>
>>
>>   deRefStaticPtr (StaticPtr (StaticName "" "Main" "g")) >>= putStrLn
>
>
> During performGC, there would be no reference to g from any thread's stack.
> I am of course assuming that g is indeed a thunk, and not statically
> evaluated to a string during compilation (I am unsure whether GHC would do
> that).
>
>> In principle, there is nothing stopping the garbage collector from
>> reclaiming the closure of `g` before it is dynamically looked up.
>
>
> Maybe a stupid question, sorry: The RemoteTable generated using
> template-haskell in CH without XStaticPointers would keep CAFs alive. So the
> XStaticPointers extension does not entail using such a table?
>
>> We are considering using StablePtrs to preserve `g`. So the code
>> desugars instead to:
>>
>> g :: String
>> g = "hello"
>>
>> main =
>>   deRefStaticPtr (let x = StaticPtr (StaticName "" "Main" "g")
>>                    in unsafePerformIO $ newStablePtr g >> return x
>>                  ) >>= putStrLn
>>
>
> Another question: Would it be sufficient to desugar "static g" to
> g `seq` StaticPtr(StaticName "" "Main" "g")
> instead of introducing a stable ptr and all that?
> After all, g is a CAF, so it is anyway "stable" in some sense, as long as it
> is alive.
>
> However, I conjecture that this only fixes the one-node test, not the actual
> use case (sending "static" stuff over the wire).
>
> Finally, there is a flag keepCAFs in the runtime which you can set to secure
> the CAFs for the entire run. The parallel runtimes for Eden and GUM (as well
> as my "packman" serialisation) do this.
>
> Yes, obviously, this opens a memory leak. It would be nice to not "keep" but
> "revert" the CAFs (ghci does

Re: D538 and compiler performance spec

2014-12-13 Thread Alan & Kim Zimmerman
Ok, I backed it out for all but the compound cases; the performance test
once more passes, and I can round-trip compound RdrNames.

On Fri, Dec 12, 2014 at 11:30 PM, Alan & Kim Zimmerman 
wrote:
>
> On reflection, I can try to make it work with annotations just for those
> fairly rare cases where there are parens/backquotes, and use the location
> span otherwise.
>
> On Fri, Dec 12, 2014 at 11:20 PM, Alan & Kim Zimmerman <
> alan.z...@gmail.com> wrote:
>>
>> The problem is round-tripping cases like this, which are valid
>>
>> ( /// ) :: Int -> Int -> Int
>> a /// b = 3
>>
>> baz :: Int -> Int -> Int
>> a ` baz ` b = 4
>>
>> There can be arbitrary spaces between the surrounding parens and the
>> operator name, and between the backquotes and the identifier in the infix
>> version.
>>
>> In each case we simply get a RdrName, which in turn is wrapped in HsVar
>> or whatever.
>>
>> The D538 productions are of the form
>>
>> var :: { Located RdrName }
>>     : varid            { $1 }
>>     | '(' varsym ')'   {% ams (sLL $1 $> (unLoc $2))
>>                               [mo $1,mj AnnVal $2,mc $3] }
>>
>> and
>>
>> tyvarop :: { Located RdrName }
>> tyvarop : '`' tyvarid '`'   {% ams (sLL $1 $> (unLoc $2))
>>                                    [mj AnnBackquote $1,mj AnnVal $2
>>                                    ,mj AnnBackquote $3] }
>>
>> So the location tracks the entire span, but we need annotations for the
>> three individual parts.
>>
>> Note: I did not check how close to the limit the performance was
>> prior to this change; it may have been the last 1% that took it over.
>>
>> Alan
>>
>>
>> On Fri, Dec 12, 2014 at 11:03 PM, Simon Peyton Jones <
>> simo...@microsoft.com> wrote:
>>>
>>>   I am now adding an `AnnVal` to every RdrName, to be able to separate
>>> it out from any decoration, such as surrounding backticks or parens.
>>>
>>>
>>>
>>> That seems like overkill to me.  (a `op` b) is an HsOpApp, and must of
>>> course have backticks unless op is an operator like (a + b), in which case
>>> it doesn’t.
>>>
>>>
>>>
>>> The corner case is something like ((`op`) a b), which will parse as
>>> (HsApp (HsApp (HsVar op) (HsVar a)) (HsVar b)).  But it would be silly for
>>> us to get bent out of shape because of such a vanishingly rare corner
>>> case.  Instead, if you really want to reflect it faithfully, add a new
>>> constructor for “parens around backticks”).
>>>
>>>
>>>
>>> Let’s only take these overheads when there is real reason to do so.
>>>
>>>
>>>
>>> Simon
>>>
>>>
>>>
>>> *From:* ghc-devs [mailto:ghc-devs-boun...@haskell.org] *On Behalf Of *Alan
>>> & Kim Zimmerman
>>> *Sent:* 12 December 2014 14:22
>>> *To:* ghc-devs@haskell.org
>>> *Subject:* D538 and compiler performance spec
>>>
>>>
>>>
>>> For API annotations I am working in the details of RdrNames, which come
>>> in a bewildering variety of syntactic forms.
>>>
>>> My latest change causes perf/compiler to fail, with
>>>
>>> bytes allocated value is too high:
>>> Expected    parsing001(normal) bytes allocated: 587079016 +/-5%
>>> Lower bound parsing001(normal) bytes allocated: 557725065
>>> Upper bound parsing001(normal) bytes allocated: 616432967
>>> Actual      parsing001(normal) bytes allocated: 704940512
>>> Deviation   parsing001(normal) bytes allocated:  20.1 %
>>>
>>> I am now adding an `AnnVal` to every RdrName, to be able to separate it
>>> out from any decoration, such as surrounding backticks or parens.
>>>
>>> Is this a problem? The alternative would be to add a SourceText field to
>>> RdrName.
>>>
>>> Alan
>>>
>>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Program runs out of memory using GHC 7.6.3

2014-12-13 Thread Matthias Fischmann

Hi David,

I don't think this is a ghc issue.

I suspect you have too many unevaluated function calls lying around
(this would cause the runtime to run out of *stack* as opposed to
*heap*).  Different versions of ghc perform different optimizations on
your code, and 7.8 knows a way to fix it that 7.6 doesn't know.

This is usually solved by adding strictness: Instead of letting the
unevaluated function calls pile up, you force them (e.g. with `print`
or `Control.DeepSeq.deepseq`).

I would take a closer look at your makeCounts function: you
traverse the input list, and traverse the entire list (starting from
each element) again in each round.  Either you should find a way to
iterate only once and accumulate all the data you need, or you should
start optimizing there.
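
By way of illustration (an editor's sketch, not part of the original message), the single-pass, strict-accumulation shape suggested here is the foldl' pattern:

import Data.Int (Int64)
import Data.List (foldl')

-- One pass over the list with a strict accumulator: foldl' forces the
-- running sum at every step, so no chain of (+) thunks builds up.
-- (Illustrative shape only, not a drop-in replacement for makeCounts.)
sumCounts :: [Int64] -> Int64
sumCounts = foldl' (+) 0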

hope this helps,
cheers,
matthias


On Sat, Dec 13, 2014 at 02:06:52AM -0700, David Spies wrote:
> Date: Sat, 13 Dec 2014 02:06:52 -0700
> From: David Spies 
> To: "ghc-devs@haskell.org" 
> Subject: Program runs out of memory using GHC 7.6.3
>
> I have a program I submitted for a Kattis problem:
> https://open.kattis.com/problems/digicomp2
> But I got memory limit exceeded.  I downloaded the test data and ran the
> program on my own computer without problems.  Eventually I found out that
> when compiling with GHC 7.6.3 (the version Kattis uses) rather than 7.8.3,
> this program runs out of memory.
> Can someone explain why it only works on the later compiler?  Is there a
> workaround so that I can submit to Kattis?
>
> Thanks,
> David

> module Main(main) where
>
> import   Control.Monad
> import   Data.Array
> import qualified Data.ByteString.Char8 as BS
> import   Data.Int
> import   Data.Maybe
>
> readAsInt :: BS.ByteString -> Int
> readAsInt = fst . fromJust . BS.readInt
>
> readVert :: IO Vert
> readVert = do
>   [s, sl, sr] <- liftM BS.words BS.getLine
>   return $ V (fromBS s) (readAsInt sl) (readAsInt sr)
>
> main::IO()
> main = do
>   [n, m64] <- liftM (map read . words) getLine :: IO [Int64]
>   let m = fromIntegral m64 :: Int
>   verts <- replicateM m readVert
>   let vside = map getSide verts
>   let vpar = concat $ zipWith makeAssoc [1..] verts
>   let parArr = accumArray (flip (:)) [] (1, m) vpar
>   let counts = makeCounts n m $ elems parArr
>   let res = zipWith doFlips counts vside
>   putStrLn $ map toChar res
>
> doFlips :: Int64 -> Side -> Side
> doFlips n
>   | odd n = flipSide
>   | otherwise = id
>
> makeCounts :: Int64 -> Int -> [[(Int, Round)]] -> [Int64]
> makeCounts n m l = tail $ elems res
>   where
>     res = listArray (0, m) $ 0 : n : map makeCount (tail l)
>     makeCount :: [(Int, Round)] -> Int64
>     makeCount = sum . map countFor
>     countFor :: (Int, Round) -> Int64
>     countFor (i, Up) = ((res ! i) + 1) `quot` 2
>     countFor (i, Down) = (res ! i) `quot` 2
>
> fromBS :: BS.ByteString -> Side
> fromBS = fromChar . BS.head
>
> fromChar :: Char -> Side
> fromChar 'L' = L
> fromChar 'R' = R
> fromChar _ = error "Bad char"
>
> toChar :: Side -> Char
> toChar L = 'L'
> toChar R = 'R'
>
> makeAssoc :: Int -> Vert -> [(Int, (Int, Round))]
> makeAssoc n (V L a b) = filtPos [(a, (n, Up)), (b, (n, Down))]
> makeAssoc n (V R a b) = filtPos [(a, (n, Down)), (b, (n, Up))]
>
> filtPos :: [(Int, a)] -> [(Int, a)]
> filtPos = filter ((> 0) . fst)
>
> data Vert = V !Side !Int !Int
>
> getSide :: Vert -> Side
> getSide (V s _ _) = s
>
> data Side = L | R
>
> data Round = Up | Down
>
> flipSide :: Side -> Side
> flipSide L = R
> flipSide R = L


> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://www.haskell.org/mailman/listinfo/ghc-devs
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Program runs out of memory using GHC 7.6.3

2014-12-13 Thread Mikolaj Konarski
It may be that GHC 7.8 optimizes the program better.
Compile with -O0 and see if it runs out of memory, too.
If so, you can just optimize the program by hand.
I'd suggest making a heap profile with -O0 or in GHC 7.6
and finding out where the memory goes.

Of course, it's possible you've hit a compiler bug,
but it makes sense not to start with that assumption.

Have fun,
Mikolaj

On Sat, Dec 13, 2014 at 10:06 AM, David Spies  wrote:
> I have a program I submitted for a Kattis problem:
> https://open.kattis.com/problems/digicomp2
> But I got memory limit exceeded.  I downloaded the test data and ran the
> program on my own computer without problems.  Eventually I found out that
> when compiling with GHC 7.6.3 (the version Kattis uses) rather than 7.8.3,
> this program runs out of memory.
> Can someone explain why it only works on the later compiler?  Is there a
> workaround so that I can submit to Kattis?
>
> Thanks,
> David
>
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://www.haskell.org/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs