mcimadamore commented on pull request #518:
URL: https://github.com/apache/lucene/pull/518#issuecomment-1005995125


   > From what I have learned, copy operations have high overhead because:
   > 
   >     * they are not hot, so they aren't optimized as quickly
   > 
   >     * when not optimized, the setup cost is high (lots of class checks to
   > determine the array type, plus the decision about swapping bytes); this is
   > especially heavy for small arrays
   
   Hi, I'm not sure why copy operations should be slower with the memory access
API than with the ByteBuffer API. I would expect most of the checks to be
similar (except for the liveness tests on the segment involved). I do recall
that the ByteBuffer API optimizes bulk copy for very small buffers (I don't
recall the exact threshold, but it was very low, something like 4 elements).
   
   In principle, this JVM fix (as of JDK 18) should help too:
   https://bugs.openjdk.java.net/browse/JDK-8269119
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


