Re: [Haskell-cafe] FRP memory leaks
I was looking at reactive-banana and netwire (also Bacon.js). Thank you for the great explanation! Now I see why this kind of code must generate a memory leak; the solution using a Behavior was not clear to me (as I am really new to the FRP stuff).

-- Łukasz Dąbek

2013/6/6 John Lato jwl...@gmail.com:

Which FRP frameworks have you been looking at? In my experience, the most publicized leaks have been time leaks, which are a particular type of memory leak related to switching. Whether time leaks can arise depends mostly on the FRP implementation. Arrowized FRP (e.g. Yampa, netwire) does not typically suffer from them, for example. Some libraries that implement the semantics of Conal Elliott's "Push-pull functional reactive programming" (or similar semantics) have been susceptible, but recent implementations are not. Sodium, elerea, and reactive-banana, for example, have generally overcome the worst issues present in earlier systems.

Leaks can still be present in current systems, of course, but now they're generally due to the programmer unintentionally retaining data, in a case that's much simpler to reason about. That is, the situation today is more similar to forgetting to use deepseq, rather than the prior leaks that were very difficult to reason about.

I think the most common current issue is that a very natural way of accumulating reactive events across time can leak. Suppose you have a library of reactive widgets, where each widget has an associated stream of IO actions that you want to run. E.g. clicking a button prints it, sliding a scale prints the value, etc.

    class Actionable a where
        actions :: a -> Event (IO ())

Suppose you have a collection that allows you to add/remove Actionable things (e.g. a button panel). This panel has an action stream that's simply the concatenation of those of its components.
One possible implementation looks like this:

    data ButtonPanel = ButtonPanel (Event (IO ()))

    emptyPanel :: ButtonPanel
    emptyPanel = ButtonPanel mempty

    addActionable :: Actionable a => ButtonPanel -> a -> ButtonPanel
    addActionable (ButtonPanel e) a = ButtonPanel (e `mappend` actions a)

I've omitted all the parts for wiring up the GUI, but suppose they're handled also, and that removing a button from the panel just removes it from the GUI and destroys the widget. After that, the button's event stream is empty, so you could just leave the ButtonPanel's event stream unchanged, because the destroyed button will never fire.

This is a memory leak. The destroyed button's event stream is still referenced by the ButtonPanel's event stream, so data related to it never gets freed. Over time your FRP network will grow, and eventually you'll hit scaling problems.

The proper solution in this instance is to keep a list of each button's event stream within the button panel. It's OK to keep a cached aggregate stream, but that cache needs to be rebuilt when a button is removed. This is usually fairly natural to do with FRP, but your ButtonPanel may look like this instead:

    data ButtonPanel = ButtonPanel (Map Key (Event (IO ())))

    addActionable    :: Actionable a => ButtonPanel -> Key -> a -> ButtonPanel
    removeActionable :: ButtonPanel -> Key -> ButtonPanel

and now you need to manage some sort of Key for collection elements. This style isn't entirely idiomatic FRP. Instead of these functions, you could have all your modifications handled via the FRP framework. For example:

    data ButtonPanel = ButtonPanel (Behavior (Map Key (Event (IO ()))))

    buttonPanel :: Actionable a => Event (Key, a) -> Event Key -> ButtonPanel

but you still need to be aware that objects can reference older objects. Behaviors are frequently created via accumulators over events (e.g. accumB), and if the accumulation is doing something like 'mappend', a memory leak is likely.
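[Editorial note: a minimal, runnable stand-in for the keyed design above. Lists of labels play the role of Event (IO ()), and Int plays the role of Key, so the aggregate stream can be inspected directly; all names mirror John's sketch but are otherwise made up, and no FRP library is involved. The point is structural: because the aggregate is rebuilt from the live components, a removed button's stream is no longer referenced anywhere.]

```haskell
import qualified Data.Map.Strict as Map

type Event = [String]   -- illustrative stand-in for Event (IO ())
type Key   = Int

newtype ButtonPanel = ButtonPanel (Map.Map Key Event)

emptyPanel :: ButtonPanel
emptyPanel = ButtonPanel Map.empty

addActionable :: ButtonPanel -> Key -> Event -> ButtonPanel
addActionable (ButtonPanel m) k e = ButtonPanel (Map.insert k e m)

removeActionable :: ButtonPanel -> Key -> ButtonPanel
removeActionable (ButtonPanel m) k = ButtonPanel (Map.delete k m)

-- The aggregate is recomputed from the live components only: nothing
-- retains the stream of a removed button, unlike the mappend version.
aggregate :: ButtonPanel -> Event
aggregate (ButtonPanel m) = concat (Map.elems m)

main :: IO ()
main = do
  let p  = addActionable (addActionable emptyPanel 1 ["click"]) 2 ["beep"]
      p' = removeActionable p 1
  print (aggregate p)   -- ["click","beep"]
  print (aggregate p')  -- ["beep"]
```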
Basically, the issue is that when you're accumulating reactive stuff over time, you need to be sure that your accumulator doesn't reference data that is otherwise expired. This example uses a push-pull style pseudocode because that's what I'm most familiar with. I'm not entirely sure how (or if) this translates to arrowized FRP, although it wouldn't surprise me if there's a similar pattern.

On Thu, Jun 6, 2013 at 2:50 AM, Łukasz Dąbek sznu...@gmail.com wrote:

Hello, Cafe!

I've heard that one of the problems of FRP (Functional Reactive Programming) is that it's easy to create memory leaks. However, I cannot find any natural examples of such leaks. Could anybody post some (pseudo)code demonstrating this phenomenon? Preferably something that arises when one is writing bigger applications in FRP style.

Thanks in advance!

-- Łukasz Dąbek

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] FRP memory leaks
Hello, Cafe!

I've heard that one of the problems of FRP (Functional Reactive Programming) is that it's easy to create memory leaks. However, I cannot find any natural examples of such leaks. Could anybody post some (pseudo)code demonstrating this phenomenon? Preferably something that arises when one is writing bigger applications in FRP style.

Thanks in advance!

-- Łukasz Dąbek
Re: [Haskell-cafe] Concurrency performance problem
2013/3/5 Nathan Howell nathan.d.how...@gmail.com:

Depends on the application, of course. The (on by default) parallel GC tends to kill performance for me... you might try running both with +RTS -sstderr to see if GC time is significantly higher, and try adding +RTS -qg1 if it is.

You are correct: the parallel GC is slowing the computation down. After some experiments I can produce two behaviors: use the single-threaded GC (the multithreaded version is slowed down by a factor of 5, but the single-threaded one goes back to normal), or increase the heap size (the multithreaded version slows down by a factor of 2, and the single-threaded version runs normally). I guess I must live with this ;)

-- Łukasz Dąbek
[Haskell-cafe] Concurrency performance problem
Hello Cafe!

I have a problem with the following code: http://hpaste.org/83460. It is a simple Monte Carlo integration. The problem is that when I run my program with +RTS -N1 I get:

Multi 693204.039020917 8.620632s
Single 693204.039020917 8.574839s
End

And with +RTS -N4 (I have four CPU cores):

Multi 693204.0390209169 11.877143s
Single 693204.039020917 11.399888s
End

I have two questions:
1) Why does performance decrease when I add more cores?
2) Why does the performance of the single-threaded integration also change with the number of cores?

Thanks for all answers,
Łukasz Dąbek.
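[Editorial note: the hpaste paste has since expired. As a rough, self-contained sketch of the shape of program being discussed — a numerical integral split across forkIO workers, with each partial sum forced before it is handed back — none of these names come from the original paste, and a deterministic van der Corput sequence stands in for the random samples so the sketch needs nothing beyond base:]

```haskell
import Control.Concurrent (forkIO, newEmptyMVar, putMVar, takeMVar)
import Control.Exception (evaluate)
import Control.Monad (forM)

-- Van der Corput low-discrepancy sequence: a deterministic stand-in
-- for random samples (no extra packages needed).
vdc :: Int -> Double
vdc = go 0.5 0
  where
    go _ acc 0 = acc
    go b acc m = go (b / 2) (acc + b * fromIntegral (m `mod` 2)) (m `div` 2)

f :: Double -> Double
f x = x * x  -- integrand; the exact integral over [0,1] is 1/3

partialSum :: Int -> Int -> Double
partialSum lo hi = sum [f (vdc i) | i <- [lo .. hi]]

main :: IO ()
main = do
  let n       = 100000
      workers = 4
      step    = n `div` workers
  vars <- forM [0 .. workers - 1] $ \w -> do
    v <- newEmptyMVar
    _ <- forkIO $ do
      -- evaluate forces the partial sum on the worker thread; without
      -- it, putMVar would store a thunk and the main thread would end
      -- up doing all of the work itself when it reads the MVar.
      s <- evaluate (partialSum (w * step + 1) ((w + 1) * step))
      putMVar v s
    pure v
  partials <- mapM takeMVar vars
  print (sum partials / fromIntegral n)  -- close to 1/3
```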
Re: [Haskell-cafe] Concurrency performance problem
What exactly do you mean? I have included a link to the full source listing: http://hpaste.org/83460.

-- Łukasz Dąbek

2013/3/4 Don Stewart don...@gmail.com:

Depends on your code...

On Mar 4, 2013 6:10 PM, Łukasz Dąbek sznu...@gmail.com wrote:

Hello Cafe!

I have a problem with the following code: http://hpaste.org/83460. It is a simple Monte Carlo integration. The problem is that when I run my program with +RTS -N1 I get:

Multi 693204.039020917 8.620632s
Single 693204.039020917 8.574839s
End

And with +RTS -N4 (I have four CPU cores):

Multi 693204.0390209169 11.877143s
Single 693204.039020917 11.399888s
End

I have two questions:
1) Why does performance decrease when I add more cores?
2) Why does the performance of the single-threaded integration also change with the number of cores?

Thanks for all answers,
Łukasz Dąbek.
Re: [Haskell-cafe] Concurrency performance problem
Thank you for your help! This solved my performance problem :) Anyway, the second question remains: why is the performance of the single-threaded calculation affected by the RTS -N parameter? Is GHC doing some parallelization behind the scenes?

-- Łukasz Dąbek.

2013/3/4 Don Stewart don...@gmail.com:

Apologies, didn't see the link on my phone :) As the comment on the link shows, you're accidentally migrating unevaluated work to the main thread, hence no speedup. Be very careful with evaluation strategies (esp. lazy expressions) around MVar and TVar points. It's too easy to put a thunk in one. The strict-concurrency package is one attempt to invert the conventional lazy box, to better match the most common case.
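[Editorial note: Don's point — that an MVar happily stores an unevaluated thunk — can be seen in miniature below. Nothing here is from the hpaste code; it simply shows that putMVar does not force its argument, so the cost (or, here, the error) only surfaces when the consumer forces the value:]

```haskell
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Exception (ErrorCall, evaluate, try)

main :: IO ()
main = do
  box <- newEmptyMVar
  -- putMVar does not force its argument: storing a bottom succeeds,
  -- just as storing an expensive unevaluated sum would.
  putMVar box (error "boom" :: Int)
  x <- takeMVar box  -- still fine: x is an unevaluated thunk
  r <- (try (evaluate x) :: IO (Either ErrorCall Int))
  case r of
    Left e  -> putStrLn ("forced only in the consumer: " ++ show e)
    Right _ -> putStrLn "unexpectedly evaluated"
```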
Re: [Haskell-cafe] Concurrency performance problem
2013/3/4 Johan Tibell johan.tib...@gmail.com:

I believe it's because -N makes GHC use the threaded RTS, which is different from the non-threaded RTS and therefore has some overhead.

That's interesting. Can you recommend some reading material about this? Besides the GHC source, of course ;) An explanation of why the decrease in performance is proportional to the number of cores would be great.

-- Łukasz Dąbek
Re: [Haskell-cafe] Concurrency performance problem
2013/3/4 bri...@aracnet.com:

do you have a link to the new code ?

The diff is at the bottom of the original paste: http://hpaste.org/83460.

If you just pass -N, GHC automatically sets the number of threads based on the number of cores on your machine.

Yes, I know that. I am just wondering why a seemingly single-threaded computation (look at singleThreadIntegrate in the source code from the first post) runs slower as the number of cores available (set through the -N option) increases.

-- Łukasz Dąbek