I think Sanne's right here: any differences in such a large-scale test are hard 
to decipher.

Also, as mentioned in a previous email, my view on its usage is the same as Sanne's:

* Definitely in APIs/SPIs.
* Be gentle with it in internals (a small sketch of the contrast is below).
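
For illustration, something along these lines (the interface and class names are made up, not actual Infinispan types):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Optional;

    // API/SPI level: the Optional return type makes "possibly absent" explicit
    // for callers, instead of documenting "may return null".
    interface LookupApi<K, V> {
        Optional<V> find(K key);
    }

    // Internal level: a plain final field that is never null, no wrapper needed.
    final class LookupImpl implements LookupApi<String, String> {
        private final Map<String, String> store = new HashMap<>();

        @Override
        public Optional<String> find(String key) {
            return Optional.ofNullable(store.get(key));
        }
    }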

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat

> On 18 May 2017, at 14:35, Sanne Grinovero <sa...@infinispan.org> wrote:
> 
> Hi Sebastian,
> 
> sorry, but I think you've been wasting time; I hope it was fun :) This is not 
> the right methodology to "settle" the matter (unless you want Radim's eyes to 
> get bloody..).
> 
> Any change in such a complex system will only affect the performance metrics 
> if you're actually addressing the dominant bottleneck. In some cases that might 
> be CPU: if your system is at 90%+ CPU, then reviewing the code to use less CPU 
> is likely to be beneficial. But even that can be counter-productive; for example, 
> if you have contention caused by optimistic locking and fail to address it while 
> making something else "faster", the performance loss on the optimistic lock might 
> become asymptotic.
> 
> A good reason to avoid excessive usage of Optional (and *excessive* doesn't 
> mean a couple dozen in a million lines of code..) is to avoid running out of 
> eden space, especially for all the code running in interpreted mode.
> 
> In your case you've been benchmarking a hugely complex beast, not least over 
> REST! When running the REST Server I doubt that allocation in eden is your 
> main problem. You just happened to have a couple of Optionals on your path; sure, 
> performance changed, but there's not enough data here to figure out what 
> exactly happened:
>  - did it change at all, or was it just a lucky optimisation? (The JIT will 
> always optimise stuff differently, even when re-running the same code.)
>  - did the overall picture improve because this code became much *less* slow?
> 
> The real complexity in benchmarking is accurately understanding why it 
> changed; that should also tell you why it didn't change more, or less..
> 
> To be fair, I actually agree that it's very likely that C2 can make any 
> performance penalty disappear.. that's totally possible, although it's 
> unlikely to be faster than just reading the field (assuming we don't need 
> branching because of null checks, but C2 can optimise that as well).
> Still, this requires the code to be optimised by the JIT first, so it won't 
> prevent us from creating a gazillion instances if we abuse its usage 
> irresponsibly. Fighting internal NPEs is a matter of writing better code; I'm 
> not against some strategically placed "Optional", but I believe it's 
> much nicer for most internal code to just avoid null, use "final", and 
> initialize things aggressively.
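> 
> For example, something in this spirit for internal code (just a sketch; the 
> class and field names are made up):
> 
>     import java.util.ArrayList;
>     import java.util.Collections;
>     import java.util.List;
>     import java.util.Objects;
> 
>     // Internal component sketch: the field is final and never null, so callers
>     // need no null checks and no Optional wrappers on the hot path.
>     final class InterceptorChainSketch {
>         private final List<String> interceptorNames;
> 
>         InterceptorChainSketch(List<String> interceptorNames) {
>             // Fail fast at construction time instead of NPE-ing later.
>             Objects.requireNonNull(interceptorNames, "interceptorNames");
>             this.interceptorNames =
>                 Collections.unmodifiableList(new ArrayList<>(interceptorNames));
>         }
> 
>         List<String> interceptorNames() {
>             return interceptorNames;
>         }
>     }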
> 
> Sure, use Optional where it makes sense, probably mostly in APIs and SPIs, but 
> please don't go overboard with it in internals. That's all I said in the 
> original debate.
> 
> In case you want to benchmark the impact of Optional, make a JMH-based 
> microbenchmark - it's interesting to see what C2 is capable of - but even 
> so it's not going to tell you much about the impact of patching thousands 
> of places all around Infinispan. And it will need some peer review 
> before it can tell you anything at all ;)
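> 
> For instance, a minimal sketch along these lines (a hypothetical benchmark, not 
> peer reviewed ;)):
> 
>     import java.util.Optional;
>     import java.util.concurrent.ThreadLocalRandom;
> 
>     import org.openjdk.jmh.annotations.Benchmark;
>     import org.openjdk.jmh.annotations.Scope;
>     import org.openjdk.jmh.annotations.Setup;
>     import org.openjdk.jmh.annotations.State;
> 
>     // Compares a plain null check against wrapping the same field in an Optional.
>     @State(Scope.Thread)
>     public class OptionalReadBenchmark {
> 
>         private String value;
> 
>         @Setup
>         public void setUp() {
>             // Decide once per trial whether the field holds a value or null.
>             value = ThreadLocalRandom.current().nextBoolean() ? "cached" : null;
>         }
> 
>         @Benchmark
>         public String plainNullCheck() {
>             return value != null ? value : "default";
>         }
> 
>         @Benchmark
>         public String optionalWrapper() {
>             // Allocates an Optional per call unless escape analysis removes it.
>             return Optional.ofNullable(value).orElse("default");
>         }
>     }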
> 
> It's actually a very challenging topic: since we produce libraries meant for 
> "anyone to use" and don't get to set the hardware specification requirements, 
> it's hard to predict whether we should optimise the system for this or that 
> resource consumption. Some people will have plenty of CPU and have problems 
> with us needing too much memory, others will have the opposite.. the real 
> challenge is in making internals "elastic" to such factors and adaptable 
> without making them too hard to tune.
> 
> Thanks,
> Sanne
> 
> 
> 
> On 18 May 2017 at 12:30, Sebastian Laskawiec <slask...@redhat.com> wrote:
> Hey!
> 
> In the past we had a couple of discussions about whether or not we should 
> use Optionals [1][2]. The main argument against them was performance.
> 
> On one hand we risk additional object allocation (the Optional itself) and 
> wrong inlining decisions taken by the C2 compiler [3]. On the other hand we all 
> probably "feel" that both of those things shouldn't be a problem and should 
> be optimized away by C2. Another argument was that Optionals don't give us 
> anything, but as I checked, we introduced nearly 80 NullPointerException bugs 
> in two years [4], so we might consider Optional as a way of fighting those. 
> The final argument I've seen was about the lack of higher-order functions, 
> which is simply not true since we have #map, #filter and #flatMap. You can do 
> pretty amazing things with them.
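> 
> For example (a toy snippet, not taken from the PR):
> 
>     import java.util.HashMap;
>     import java.util.Map;
>     import java.util.Optional;
> 
>     public class OptionalChaining {
>         public static void main(String[] args) {
>             Map<String, String> headers = new HashMap<>();
>             headers.put("Content-Type", "  text/plain  ");
> 
>             // Look the value up, transform and validate it, then fall back to a
>             // default - no explicit null checks anywhere along the way.
>             String mediaType = Optional.ofNullable(headers.get("Content-Type"))
>                     .map(String::trim)
>                     .filter(s -> !s.isEmpty())
>                     .orElse("application/octet-stream");
> 
>             System.out.println(mediaType); // prints "text/plain"
>         }
>     }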
> 
> I decided to check the performance while refactoring the REST interface. I 
> created a PR with Optionals [5], ran performance tests, removed all Optionals 
> and re-ran the tests. You will be surprised by the results [6]:
> 
> Test case                    With Optionals [%]         Without Optionals [%]
>                              Run 1   Run 2   Avg        Run 1   Run 2   Avg
> Non-TX reads 10 threads
>   Throughput                 32.54   32.87   32.71      31.74   34.04   32.89
>   Response time             -24.12  -24.63  -24.38     -24.37  -25.69  -25.03
> Non-TX reads 100 threads
>   Throughput                  6.48  -12.79   -3.16      -7.06   -6.14   -6.60
>   Response time              -6.15   14.93    4.39       7.88    6.49    7.19
> Non-TX writes 10 threads
>   Throughput                  9.21    7.60    8.41       4.66    7.15    5.91
>   Response time              -8.92   -7.11   -8.02      -5.29   -6.93   -6.11
> Non-TX writes 100 threads
>   Throughput                  2.53    1.65    2.09      -1.16    4.67    1.76
>   Response time              -2.13   -1.79   -1.96       0.91   -4.67   -1.88
> 
> I also created JMH + Flight Recorder tests and again, the results showed no 
> evidence of a slowdown caused by Optionals [7].
> 
> Now please take those results with a grain of salt, since they tend to drift 
> by +/-5% (sometimes even more). But it's very clear that the performance 
> results are very similar, if not the same.
> 
> Having those numbers at hand, do we want Optionals in the Infinispan 
> codebase or not? And if not, let's state very clearly (and write it into the 
> contributing guide) that it's because we don't like them, not because of 
> performance.
> 
> Thanks,
> Sebastian
> 
> [1] http://lists.jboss.org/pipermail/infinispan-dev/2017-March/017370.html
> [2] http://lists.jboss.org/pipermail/infinispan-dev/2016-August/016796.html
> [3] http://vanillajava.blogspot.ro/2015/01/java-lambdas-and-low-latency.html
> [4] 
> https://issues.jboss.org/issues/?jql=project%20%3D%20ISPN%20AND%20issuetype%20%3D%20Bug%20AND%20text%20%7E%20%22NullPointerException%22%20AND%20created%20%3E%3D%202015-04-27%20AND%20created%20%3C%3D%202017-04-27
> [5] https://github.com/infinispan/infinispan/pull/5094
> [6] 
> https://docs.google.com/a/redhat.com/spreadsheets/d/1oep6Was0FfvHdqBCwpCFIqcPfJZ5-5_YYUqlRtUxEkM/edit?usp=sharing
> [7] https://github.com/infinispan/infinispan/pull/5094#issuecomment-296970673
> -- 
> SEBASTIAN ŁASKAWIEC
> INFINISPAN DEVELOPER
> Red Hat EMEA
> 
> 


_______________________________________________
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev
