Thank you for the suggestions, both of you.
After some more profiling, I've come to realise that most of the
garbage is created by allocations made when serialising messages to write
to the socket. I am going to try to reduce this next. If that does
not help reduce the latency spikes, I will work th
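The thread doesn't show the actual serialisation code, but one common way to cut allocation here is to build each outgoing message with a single Data.ByteString.Builder rather than concatenating intermediate ByteStrings. A minimal sketch, with a hypothetical Msg type standing in for the real wire format:

```haskell
{-# LANGUAGE OverloadedStrings #-}
-- Sketch: serialise via one Builder so the encoder fills a single
-- output buffer instead of allocating many intermediate ByteStrings.
import qualified Data.ByteString.Lazy as BL
import Data.ByteString.Builder

-- Hypothetical message type; the real format is not shown in the thread.
data Msg = Msg { msgId :: Int, msgBody :: BL.ByteString }

encodeMsg :: Msg -> BL.ByteString
encodeMsg (Msg i b) =
  toLazyByteString $
       intDec i          -- message id as decimal
    <> char8 ':'         -- separator
    <> lazyByteString b  -- payload, no copy for lazy chunks

main :: IO ()
main = BL.putStr (encodeMsg (Msg 42 "hello"))
```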
By dumping metrics, I mean essentially the same as the ghc-events-analyze
annotations, but with any additional information that is useful for the
investigation. In particular, if you have a message id, include it. You
may also want to annotate thread names with GHC.Conc.labelThread. You may
also want
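The annotation John describes could be sketched like this (the worker function and marker strings are illustrative, not from the thread); compile with -eventlog and run with +RTS -l so the markers land in the eventlog:

```haskell
-- Sketch: label threads and emit custom eventlog markers so
-- ghc-events-analyze / ThreadScope output identifies each message.
import Control.Concurrent (forkIO, myThreadId, threadDelay)
import GHC.Conc (labelThread)
import Debug.Trace (traceEventIO)

worker :: Int -> IO ()
worker n = do
  tid <- myThreadId
  labelThread tid ("worker-" ++ show n)   -- name shows up in ThreadScope
  traceEventIO ("START handle msg " ++ show n)  -- include the message id
  threadDelay 1000                        -- stand-in for real work
  traceEventIO ("STOP handle msg " ++ show n)

main :: IO ()
main = mapM_ (forkIO . worker) [1 .. 4] >> threadDelay 100000
</imports>
```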
On Tue, Sep 29, 2015 at 2:03 AM, Will Sewell wrote:
> * I then tried a value of -A2048k because he also said "using a very
> large young generation size might outweigh the cache benefits". I
> don't exactly know what he meant by "a very large young generation
> size", so I guessed at this value.
That's interesting. I have not done this kind of work before, and had
not come across CDFs. I can see why it makes sense to look at the mean
and the tail.
Your assumption is correct. The messages have a similar cost, which is
why the graph I posted is relatively flat most of the time. The spikes
sugges
Will
I was trying to get a feeling for what those coloured squares actually
denoted. Typically we examine this sort of performance information
as CDFs (cumulative distribution functions [1]), trying to pull apart the
issues affecting the mean (i.e. the typical path through the code/system) and
those that
Thank you for the reply Neil.
The spikes are in response time. The graph I linked to shows the
distribution of response times in a given window of time (darkness of
the square is the number of messages in a particular window of
response time). So the spikes are in the mean and also the max
respons
Will
Is your issue with the spikes in response time, rather than the mean values?
If so, once you've reduced the amount of unnecessary mutation, you might want
to take more control over when the GC takes place. You might want to disable
GC on timer (-I0) and force GC to occur at points you s
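The approach suggested here, forcing a collection at a quiet point once the idle-GC timer is off, could look like the following sketch (processBatch is a hypothetical stand-in for the real message handling); run the program with +RTS -I0:

```haskell
-- Sketch: with idle GC disabled (+RTS -I0), trigger a major
-- collection between batches rather than mid-request.
import System.Mem (performGC)

processBatch :: [Int] -> IO Int
processBatch = pure . sum   -- stand-in for real message handling

main :: IO ()
main = do
  r <- processBatch [1 .. 1000]
  print r
  performGC   -- collect now, while no request is in flight
```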
Thanks for the reply John. I will have a go at doing that. What do you
mean exactly by dumping metrics, do you mean measuring the latency
within the program, and dumping it if it exceeds a certain threshold?
And from the answers I'm assuming you believe it is the GC that is
most likely causing the
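The in-process threshold measurement asked about above might be sketched like this (timedDump is a hypothetical helper; it uses the `time` package):

```haskell
-- Sketch: time each operation in-process and only log the ones
-- that exceed a latency threshold.
import Data.Time.Clock (getCurrentTime, diffUTCTime)
import System.IO (hPutStrLn, stderr)

timedDump :: String -> Double -> IO a -> IO a
timedDump label thresholdSecs act = do
  t0 <- getCurrentTime
  r  <- act
  t1 <- getCurrentTime
  let elapsed = realToFrac (diffUTCTime t1 t0) :: Double
  -- Dump only when the operation was slower than the threshold.
  if elapsed > thresholdSecs
    then hPutStrLn stderr (label ++ " took " ++ show elapsed ++ "s")
    else pure ()
  pure r

main :: IO ()
main = timedDump "sum" 1.0 (print (sum [1 .. 100 :: Int]))
```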
Thanks for the reply Greg. I have already tried tweaking these values
a bit, and this is what I found:
* I first tried -A256k because the L2 cache is that size (Simon Marlow
mentioned this can lead to good performance
http://stackoverflow.com/a/3172704/1018290)
* I then tried a value of -A2048k be
Try Greg's recommendations first. If you still need to do more
investigation, I'd recommend looking at some samples with ThreadScope
or by dumping the eventlog to text. I really like
ghc-events-analyze, but it doesn't provide quite the same level of detail.
You may also want to dump som
On Mon, Sep 28, 2015 at 9:08 AM, Will Sewell wrote:
> If it is the GC, then is there anything that can be done about it?
- Increase the value of -A (the default is too small) -- the best value for
this is the L3 cache size of the chip
- Increase the value of -H (total heap size) -- this will use more
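To see whether a larger allocation area actually changes collection counts, GHC's runtime statistics can be read from inside the program. A sketch using GHC.Stats (the getRTSStats interface is from GHC 8.2+, later than this 2015 thread, so treat it as illustrative); run with e.g. +RTS -T -A64m:

```haskell
-- Sketch: print GC counts at runtime to compare RTS sizing flags.
-- Requires +RTS -T so the stats are collected.
import GHC.Stats (getRTSStats, getRTSStatsEnabled, gcs, major_gcs)
import Control.Monad (when)

main :: IO ()
main = do
  ok <- getRTSStatsEnabled
  when ok $ do
    s <- getRTSStats
    putStrLn ("collections: " ++ show (gcs s)
              ++ ", major: " ++ show (major_gcs s))
```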
Hi, I was told in the #haskell IRC channel that this would be a good
place to ask this question, so here goes!
We’re writing a low-latency messaging system. The problem is we are
getting a lot of latency spikes. See this image:
http://i.imgur.com/GZ0Ek98.png (yellow to red is the 90th percentile),