On Aug 31, 2012, at 7:39 PM, Rodrigo Kumpera wrote:
> Unless you use explicit memory management or some other trick, such a scheme is
> not any better than what both collectors already do.
>
> Both use a size-segregated allocator for the major heap, which works very much
> like an object pool based on size.
Unless you use explicit memory management or some other trick, such a scheme
is not any better than what both collectors already do.
Both use a size-segregated allocator for the major heap, which works very
much like an object pool based on size.
Object pools work when allocating memory is very expensive.
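To make that concrete, here is a minimal sketch of the kind of object pool
being discussed (the type and method names are hypothetical, purely for
illustration):

using System.Collections.Generic;

// Hypothetical example, not from the application under discussion: a pool
// that hands out reusable Sample objects instead of allocating a new one
// per record. It only pays off when the caller knows exactly when an
// instance is no longer needed.
class Sample
{
    public long Timestamp;
    public double Value;
    public void Reset() { Timestamp = 0; Value = 0.0; }
}

class SamplePool
{
    private readonly Stack<Sample> free = new Stack<Sample>();

    public Sample Rent()
    {
        // Reuse a returned instance if one is available, otherwise allocate.
        return free.Count > 0 ? free.Pop() : new Sample();
    }

    public void Return(Sample s)
    {
        s.Reset();      // clear old state before the object is handed out again
        free.Push(s);
    }
}

The point above is that both collectors' major-heap allocators already keep
size-segregated free lists, so a pool like this mainly helps when the
allocation (and collection) can be avoided entirely by reusing live
instances, which requires the explicit lifecycle control mentioned in the
next message.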
I use object pools where I have control over the lifecycle of objects used
with high frequency. In the application I was discussing with respect to sgen,
it is very hard to explicitly use object pools (nor can I use structs in this
case).
I think Miguel mentioned it briefly in a blog, but would
With this specific application (which is single-threaded), I have a "volatile"
working set of ~2GB. By volatile I mean that these are not application-lifetime
objects; rather, they will be disposed of at some point during evaluation.
More specifically, I read 1.6TB of data incrementally into 1600 timeseries.
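A rough sketch of that kind of workload (the class and method names are
hypothetical; the point is only that each window's objects stay live through
several evaluations and then become garbage, which is a hard pattern for a
generational collector):

using System.Collections.Generic;

// Hypothetical shape of the workload: chunks of timeseries data are read
// incrementally, a window of recent chunks is kept alive while it is
// evaluated, and the oldest chunk is dropped as the window advances.
class WindowedEvaluator
{
    private readonly Queue<double[]> window = new Queue<double[]>();
    private readonly int maxChunks;

    public WindowedEvaluator(int maxChunks)
    {
        this.maxChunks = maxChunks;
    }

    public double Push(double[] chunk)
    {
        window.Enqueue(chunk);      // freshly allocated data, live for a while
        if (window.Count > maxChunks)
            window.Dequeue();       // the oldest chunk becomes unreachable here
        return Evaluate();
    }

    private double Evaluate()
    {
        double sum = 0;
        foreach (var chunk in window)
            foreach (var v in chunk)
                sum += v;
        return sum;                 // stand-in for the real evaluation
    }
}

With ~2GB of such chunks live at any time, much of what the nursery collector
scans has not died yet, which is the survivor-rate issue raised in the reply
below.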
There are two situations that make sgen slower than boehm.
The first is a non-generational workload. If your survivor rate is too
high, a generational collector can't compete with a single-space one like
boehm.
The second one is if you have too much of the old generation pointing to
young objects.
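The second situation can be sketched like this (hypothetical code, not taken
from the application above): a long-lived container whose slots are repeatedly
overwritten with freshly allocated objects. Each such store creates an
old-generation-to-nursery reference that sgen has to record via its write
barrier and re-scan on every minor collection, so nursery collections stop
being cheap even though the young objects die quickly.

using System;

class Reading
{
    public double Value;
}

class Program
{
    static void Main()
    {
        // Long-lived array: after surviving a few collections it sits in
        // the old generation.
        var latest = new Reading[100000];
        var rng = new Random(1);

        for (int iteration = 0; iteration < 1000; iteration++)
        {
            for (int i = 0; i < latest.Length; i++)
            {
                // Every store replaces an old-generation slot with a pointer
                // to a brand-new nursery object: exactly the old-to-young
                // references the minor collector has to track and scan.
                latest[i] = new Reading { Value = rng.NextDouble() };
            }
        }
    }
}

If the payload could be held as value types (e.g. a double[] or an array of
structs), those stores would not create references at all, but as noted
earlier structs are not an option in this case.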
Hi,
sgen is now working for me (thanks to a subtle bug fix for thread-local storage
by Zoltan). However, for one application, sgen is 25% slower than the same run
with the boehm collector. I am processing some GBs of timeseries data, though
only evaluating a window at a time. As the window re