Hi devs,

I've been working on a benchmarking framework for TinkerPop, specifically
for the Java driver.

The idea is that a TinkerPop user can point the framework at their own
instance of gremlin-server (from any provider), fix some of the driver's
configs, and treat the others as variables. The framework then runs through
the different combinations of settings, recording latency and throughput.
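To make that concrete, here is a minimal sketch of what one such benchmark
could look like on JMH. It is illustrative only: the class name, the swept
parameter values, and the probe query are placeholders rather than the
framework's actual code, and it assumes a gremlin-server reachable on
localhost:8182.

    import java.util.concurrent.TimeUnit;
    import org.apache.tinkerpop.gremlin.driver.Client;
    import org.apache.tinkerpop.gremlin.driver.Cluster;
    import org.openjdk.jmh.annotations.*;

    @State(Scope.Benchmark)
    @BenchmarkMode({Mode.Throughput, Mode.SampleTime}) // throughput + latency
    @OutputTimeUnit(TimeUnit.MILLISECONDS)
    public class DriverConfigBenchmark {

        // Variable configs: JMH runs every combination of these values.
        @Param({"2", "8", "32"})
        public int maxConnectionPoolSize;

        @Param({"1", "4", "16"})
        public int maxInProcessPerConnection;

        private Cluster cluster;
        private Client client;

        @Setup
        public void setUp() {
            // Fixed configs (contact point, port) stay constant across runs.
            cluster = Cluster.build()
                    .addContactPoint("localhost")
                    .port(8182)
                    .maxConnectionPoolSize(maxConnectionPoolSize)
                    .maxInProcessPerConnection(maxInProcessPerConnection)
                    .create();
            client = cluster.connect();
        }

        @Benchmark
        public Object submitQuery() throws Exception {
            // Placeholder probe query; returning the result prevents
            // dead-code elimination.
            return client.submit("g.V().limit(10).valueMap()").all().get();
        }

        @TearDown
        public void tearDown() {
            cluster.close();
        }
    }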

The output of the benchmarking framework would be guidance on the Java
driver configuration that best serves the user's latency and throughput
goals, which they can then apply to their workload outside the framework.

A provider could also use this on the server side: manually adjust
gremlinPool, threadPoolWorker, etc., re-run the framework under each
setting, and optimize throughput and latency there as well.
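For illustration, that server-side sweep could even be scripted by starting
gremlin-server programmatically and overriding the two pool settings. A
sketch under assumptions: the gremlin-server jars on the classpath, a stock
conf/gremlin-server.yaml as the base config, and a hypothetical driver-side
benchmark run in the loop body.

    import org.apache.tinkerpop.gremlin.server.GremlinServer;
    import org.apache.tinkerpop.gremlin.server.Settings;

    public class ServerPoolSweep {
        public static void main(String[] args) throws Exception {
            // Try each combination of the two server-side pools.
            for (int gremlinPool : new int[]{4, 8, 16}) {
                for (int threadPoolWorker : new int[]{1, 2, 4}) {
                    Settings settings = Settings.read("conf/gremlin-server.yaml");
                    settings.gremlinPool = gremlinPool;
                    settings.threadPoolWorker = threadPoolWorker;

                    GremlinServer server = new GremlinServer(settings);
                    server.start().join();
                    // ... point the benchmarking framework at this instance
                    // and record latency/throughput for this combination ...
                    server.stop().join();
                }
            }
        }
    }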

The benchmark is built on JMH and packaged into a Docker container, so it
is very easy to use. The configs are passed at runtime: a user just calls a
docker build and then a docker run script, with the configs set up in the
Docker config.
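As a usage example, assuming the image is named gremlin-driver-bench and
the configs live in an env file (both names hypothetical):

    docker build -t gremlin-driver-bench .
    docker run --env-file bench.env gremlin-driver-bench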

We could also add other benchmarks, at any scale, to the framework, and
allow providers who wish to participate to publish their results.

Anyone have any thoughts on this?

Cheers,
Lyndon
-- 

Lyndon Bauto
Senior Software Engineer
Aerospike, Inc.
www.aerospike.com
lba...@aerospike.com
