On Sat, 23 Dec 2023 15:02:28 GMT, Vladimir Sitnikov <vsitni...@openjdk.org> 
wrote:

>> Why do you want to put *that* in a test case at all? So far I did not come 
>> across performance-based *tests* (only *benchmarks*), i. e. nobody ever 
>> requested that from me in any of my NIO optimizations. Typically it was 
>> sufficient to document the JMH results before and after a PR *just once* 
>> (not dynamically in the form of a test).
>> 
>> For *this* PR it is sufficient to prove what the PR's title says: Prove the 
>> *direct* call (i. e. the *absence of a copy*) and we're good to merge. I 
>> already explained how to prove *that*.
>
>>Typically it was sufficient to document the JMH results before and after a PR 
>>just once (not dynamically in the form of a test).
> 
> The problem with "results before and after a PR just once" is those 
> benchmarks will not automatically fail the build if somebody else breaks the 
> optimization unintentionally. In other words, benchmarks do not serve as a 
> safety net.
> 
> At the same time, if the test code were to ensure "less than 20 bytes 
> allocated" in a common case of transferring data from `BufferedInputStream` 
> to `ByteArrayOutputStream`, then it would be a nice safety net so the 
> optimization will not be broken unintentionally.
> 
>>The latter should be rather easy to check: Invoke transferTo(out) two times 
>>in a row and compare the identity of the two byte arrays passed to 
>>out.write(). If they stay the same, then apparently no temporary copy was 
>>created. To achieve this, the BIS must be a wrapper around an extendable input 
>>stream (like FileInputStream) so between calls the stream could get extended 
>>(e. g. by writing into the file)
> 
> Markus, I am afraid you missed that anything that "extends FileOutputStream" 
> (I assume you mean `FileOutputStream`, not `FileInputStream`) would **not** 
> qualify as a trusted stream, and `transferTo` will make a copy. The current 
> check is `clazz == FileOutputStream.class`

I think we should not overcomplicate things. Nobody really asked for actually 
failing the test when the number of bytes allocated, or the time taken to 
allocate them, exceeds an expected limit. The sole requirement for the test is 
that it proves the claim outlined by the topic: Prove that 
`BufferedInputStream.buf` is used directly, and fail if not. Nothing else. 
Everything beyond that is nice to have, as we did not have such a test for the 
similar change at BAIS.

As long as it is proven that we always use *the same* buffer, it is implicitly 
proven that the original claim of the published benchmarks still holds true. 
There is no need to re-evaluate the actual benchmark again and again; it would 
not provide any additional benefit, but would make the test much more complex, 
less reliable, and increase the risk of false failures in automated test farms 
running on heterogeneous hardware.

I did not mean `FileOutputStream` but really `FileInputStream`: You have a file 
as an input, so you can extend the file between two calls to `transferTo(out)` 
in order to be able to execute `transferTo` two times in a row. Nevertheless 
you are right: The problem with my proposed solution is that the mock receiver 
needed in that scenario (the code that records the identity of the buffer) 
would not be in the set of trusted classes. Actually, I do not see how we can 
solve this.
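For illustration, the identity check I had in mind could be sketched roughly as 
below. `RecordingOutputStream` and `demo` are hypothetical names of my own; 
and, as said, precisely because such a subclass is *not* among the trusted 
classes, `transferTo` is expected to fall back to copying, so the identity 
comparison will not actually observe the optimization:

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayOutputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class BufferIdentityCheck {

    /** Hypothetical mock receiver: records the identity of the byte array
     *  passed to write(byte[], int, int). Being a subclass, it is NOT in
     *  the set of trusted classes, so transferTo is expected to copy. */
    static class RecordingOutputStream extends ByteArrayOutputStream {
        byte[] lastArray;

        @Override
        public synchronized void write(byte[] b, int off, int len) {
            lastArray = b;
            super.write(b, off, len);
        }
    }

    /** Transfers twice, extending the underlying file in between, and
     *  reports whether both calls handed over the identical array. */
    static String demo() throws IOException {
        Path file = Files.createTempFile("xfer", ".bin");
        try {
            Files.write(file, "first chunk".getBytes());
            RecordingOutputStream out = new RecordingOutputStream();
            try (InputStream in = new BufferedInputStream(
                    new FileInputStream(file.toFile()))) {
                in.transferTo(out);
                byte[] first = out.lastArray;

                // extend the file so a second transfer has data to move
                Files.write(file, " second chunk".getBytes(),
                        StandardOpenOption.APPEND);
                in.transferTo(out);
                byte[] second = out.lastArray;

                return out.toString() + " / same array: " + (first == second);
            }
        } finally {
            Files.delete(file);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demo());
    }
}
```
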

@bplb In https://github.com/openjdk/jdk/pull/16893 we did not check whether the 
PR is actually effective, but solely whether the buffer is effectively 
unchanged. The same could be done here. Is it OK for you if we skip checking 
the actual efficiency of the *actual target* of this PR itself (i. e. still not 
checking whether it is *the same* buffer)?

-------------

PR Review Comment: https://git.openjdk.org/jdk/pull/16879#discussion_r1435679827
