[ 
https://issues.apache.org/jira/browse/TINKERPOP-1233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15210433#comment-15210433
 ] 

Ted Wilmes commented on TINKERPOP-1233:
---------------------------------------

We can definitely add Gremlin Server integration-level benchmarks.  My initial 
thought was that this benchmark module could serve two purposes.  First, we can 
use it to track and identify performance improvements or regressions in key 
areas from release to release.  It may be good to focus that first set of key 
areas on current hotspots in the codebase.  We could do some further profiling 
of Gremlin Server and the other components to narrow in on these focus areas.  
This benchmarking could then lead us to some high-value improvements we can 
make in the near term.
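
To make that first purpose concrete, a release-to-release benchmark for a 
hotspot traversal could look something like the sketch below (plain JMH 
annotations against the TinkerGraph "modern" toy graph; the class name and 
traversal are placeholders, not anything that exists in the module today):

{code}
import java.util.concurrent.TimeUnit;

import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerFactory;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
public class TraversalHotspotBenchmark {

    private GraphTraversalSource g;

    @Setup
    public void prepare() {
        // small, deterministic toy graph so numbers stay comparable across releases
        this.g = TinkerFactory.createModern().traversal();
    }

    @Benchmark
    public long g_V_out_out_count() {
        // a candidate hotspot traversal to track from release to release
        return g.V().out().out().count().next();
    }
}
{code}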

Secondly, I think it could be a handy way to quickly measure different 
approaches while doing development.  For example, say I'm writing some code and 
have two approaches in mind, but I'm not sure which will be more performant.  
I'd quickly write up some benchmarks and test the approaches out.  I'd decide 
on one and move on, but probably wouldn't include the benchmarks I used to make 
my decision in the PR.  I think merging those sorts of A/B benchmarks into our 
main code base doesn't serve much purpose and would add a maintenance burden.  
However, it may be good to attach the results to the ticket to document the 
decision-making process.
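
For illustration, a throwaway A/B comparison could be as simple as two JMH 
methods side by side (the traversals here are made-up stand-ins for whichever 
two approaches are being weighed, and the class would only ever live on a dev 
branch):

{code}
import java.util.concurrent.TimeUnit;

import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerFactory;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

import static org.apache.tinkerpop.gremlin.process.traversal.P.gt;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
public class ApproachComparisonBenchmark {

    private GraphTraversalSource g;

    @Setup
    public void prepare() {
        this.g = TinkerFactory.createModern().traversal();
    }

    @Benchmark
    public long approachA() {
        // approach A: filter by label, then by property
        return g.V().hasLabel("person").has("age", gt(30)).count().next();
    }

    @Benchmark
    public long approachB() {
        // approach B: single has() step with label, key, and predicate
        return g.V().has("person", "age", gt(30)).count().next();
    }
}
{code}

The resulting numbers would then go into the ticket as a comment or attachment 
rather than into the PR.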

> Gremlin-Benchmark wish list.
> ----------------------------
>
>                 Key: TINKERPOP-1233
>                 URL: https://issues.apache.org/jira/browse/TINKERPOP-1233
>             Project: TinkerPop
>          Issue Type: Improvement
>          Components: benchmark
>    Affects Versions: 3.2.0-incubating
>            Reporter: Marko A. Rodriguez
>
> [~twilmes] has developed {{gremlin-benchmark}} which is slated for 3.2.0 
> (TINKERPOP-1016). This is really good as now we can ensure the Gremlin 
> traversal machine only speeds up with each release. Here is a collection of 
> things I would like to be able to do with {{gremlin-benchmark}}.
> ----
> *Benchmarks in the Strategy Tests*
> {code}
> // ensure that traversalA is at least 1.5 times faster than traversalB
> assertTrue(Benchmark.compare(traversalA,traversalB) > 1.50d) 
> {code}
> With this, I can have an {{OptimizationStrategy}} applied to {{traversalA}} 
> and not to {{traversalB}} and prove via "mvn clean install" that the strategy 
> is in fact "worth it." I bet there are other good static methods we could 
> create. Hell, why not just have a {{BenchmarkAsserts}} that we can statically 
> import like JUnit's {{Assert}}? Then it's just:
> {code}
> assertFaster(traversalA,traversalB,1.50d)
> assertSmaller(traversalA,traversalB) // memory usage or object creation?
> assertTime(traversal, 1000, TimeUnit.MILLISECONDS) // has to complete in 1 second?
> ... ?
> {code}
> It's a little scary as not all computers are the same, but it would be nice to 
> know that we have tests for space and time costs.
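> A very rough, hypothetical sketch of what such an {{assertFaster}} could look 
> like (plain wall-clock timing of a single run, so it would be noisy across 
> machines; none of this exists in the module yet):
> {code}
> import org.apache.tinkerpop.gremlin.process.traversal.Traversal;
>
> public final class BenchmarkAsserts {
>
>     // asserts that traversal a completes at least `factor` times faster than b
>     public static void assertFaster(final Traversal<?, ?> a, final Traversal<?, ?> b, final double factor) {
>         final long timeA = time(a);
>         final long timeB = time(b);
>         if ((double) timeB / (double) timeA < factor)
>             throw new AssertionError("expected A to be " + factor + "x faster than B, "
>                     + "but got A=" + timeA + "ns, B=" + timeB + "ns");
>     }
>
>     // wall-clock time to fully iterate a traversal (each traversal is single-use)
>     private static long time(final Traversal<?, ?> traversal) {
>         final long start = System.nanoTime();
>         traversal.iterate();
>         return System.nanoTime() - start;
>     }
> }
> {code}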
> ----
> *Benchmarks saved locally over the course of a release*
> This is tricky, but it would be cool if local result files (not committed to GitHub) were 
> created like this:
> {code}
> tinkerpop3/gremlin-benchmark/benchmarks/g_V_out_out_12:23:66UT23.txt
> {code}
> Then a test case could ensure that all newer runs of that benchmark are 
> faster than older ones. If it's, let's say, 10%+ slower, throw an {{Exception}} 
> and the test fails. ??
> What else can we do? Can we know whether a certain area of code is faster? 
> For instance, strategy application or requirements aggregation? If we can 
> introspect like that, that would be stellar.
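> For the local-baseline idea above, a hypothetical check against the newest 
> saved file might look something like this (assuming, purely for illustration, 
> that each file holds a single elapsed-time value in milliseconds):
> {code}
> import java.io.IOException;
> import java.nio.file.Files;
> import java.nio.file.Path;
> import java.util.Comparator;
> import java.util.Optional;
> import java.util.stream.Stream;
>
> final class BaselineCheck {
>     // fail if the current run is 10%+ slower than the newest saved baseline
>     static void assertNotSlowerThanBaseline(final Path benchmarkDir, final long currentMillis) throws IOException {
>         final Optional<Path> newest;
>         try (Stream<Path> files = Files.list(benchmarkDir)) {
>             newest = files.filter(p -> p.toString().endsWith(".txt"))
>                     .max(Comparator.comparingLong(p -> p.toFile().lastModified()));
>         }
>         if (!newest.isPresent())
>             return; // no baseline yet, nothing to compare against
>         final long baselineMillis = Long.parseLong(new String(Files.readAllBytes(newest.get())).trim());
>         if (currentMillis > baselineMillis * 1.10)
>             throw new AssertionError("regression: " + currentMillis + "ms vs baseline " + baselineMillis + "ms");
>     }
> }
> {code}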



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
