I have some old numbers from a prior experiment with a 1 TB heap. It might be
sufficient to say that neither CMS nor G1 survived until the end of the test,
which was a simple LTT load ... of a billion-row-plus in-memory table on
heap in a single regionserver, but that is a detail. :-) I might have time
to re
Shenandoah GC is interesting. Do you have any comparisons to CMS or G1? Are
y'all running Shenandoah in production already?
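For readers unfamiliar with "LTT": it refers to HBase's LoadTestTool. A run along the lines described above might look like the sketch below. This is a hypothetical reconstruction, not the original invocation; the table name, column sizing, thread count, and key count are all assumptions.

```shell
# Hypothetical LoadTestTool (LTT) sketch: write a billion-plus rows into a
# single table whose column family is kept in memory on heap.
# -write takes <avg cols per key>:<avg data size>[:<writer threads>]
hbase org.apache.hadoop.hbase.util.LoadTestTool \
  -tn gc_test \
  -write 1:1024:32 \
  -num_keys 1200000000 \
  -in_memory
```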
On Tue, Jul 31, 2018 at 9:37 AM, Josh Elser wrote:
> +1, great stuff! Thanks to you for doing this testing and sharing results
> with us all.
>
On 7/30/18 10:38 PM, Stack wrote:
Thanks Andy. Looks good.
Maybe next time add -p clientbuffering=true ?
Good on you,
S
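(Editor's note on the `-p clientbuffering=true` suggestion: that is the property syntax of a YCSB run against the HBase binding, where `clientbuffering=true` switches writes from per-put autoflush to a BufferedMutator. A hedged sketch of such a run follows; the workload file, table name, and record count are assumptions, not values from this thread.)

```shell
# Hypothetical YCSB load phase with client-side write buffering enabled.
# clientbuffering=true asks the HBase binding to batch puts through a
# BufferedMutator rather than autoflushing each one.
bin/ycsb load hbase12 -P workloads/workloada \
  -p table=usertable \
  -p columnfamily=family \
  -p clientbuffering=true \
  -p recordcount=100000000
```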
On Mon, Jul 30, 2018 at 6:55 PM Andrew Purtell wrote:
A couple of notes and general observations.
Note that all instances remained up for the entire duration of testing,
including burn-in (all tests ran on the same hardware), and HDFS volumes were
built on locally attached storage (hence C3-generation instances), so I
controlled as much as possible for sy
Instance OS:
Linux version 4.14.55-62.37.amzn1.x86_64
Instance type:
master: c3.8xlarge, regionservers: c3.8xlarge x 5, client: c3.8xlarge
Regionserver JVM:
OpenJDK Runtime Environment (build 1.8.0_172-shenandoah-b11)
64-Bit Server VM (build 25.172-b11, mixed mode)
Regionserv