[ https://issues.apache.org/jira/browse/LUCENE-5161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13735926#comment-13735926 ]
ASF subversion and git services commented on LUCENE-5161:
---------------------------------------------------------
Commit 1512729 from [~rcmuir] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1512729 ]
LUCENE-5161: set sane default readChunkSizes, make the setter work, and test
the chunking
> review FSDirectory chunking defaults and test the chunking
> ----------------------------------------------------------
>
> Key: LUCENE-5161
> URL: https://issues.apache.org/jira/browse/LUCENE-5161
> Project: Lucene - Core
> Issue Type: Improvement
> Reporter: Robert Muir
> Assignee: Robert Muir
> Attachments: LUCENE-5161.patch, LUCENE-5161.patch, LUCENE-5161.patch
>
>
> Today there is a loop in SimpleFS/NIOFS:
> {code}
>     try {
>       do {
>         final int readLength;
>         if (total + chunkSize > len) {
>           readLength = len - total;
>         } else {
>           // LUCENE-1566 - work around JVM Bug by breaking very large reads into chunks
>           readLength = chunkSize;
>         }
>         final int i = file.read(b, offset + total, readLength);
>         total += i;
>       } while (total < len);
>     } catch (OutOfMemoryError e) {
> {code}
> I bet if you look at the Clover report it's untested, because it's fixed at
> 100MB for 32-bit users and 2GB for 64-bit users (are these defaults even
> good?!).
> Also, if you call the setter on a 64-bit machine to change the size, it just
> totally ignores it. We should remove that; the setter should always work.
> And we should set it to small values in tests so this loop is actually
> executed.
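
A minimal sketch of the kind of test meant here (the class name, file name, and
tiny chunk size are illustrative assumptions, not the committed test): open an
FSDirectory with a deliberately small read chunk size so that a single read larger
than one chunk drives the loop above through several iterations.
{code}
import java.io.File;
import org.apache.lucene.store.IOContext;
import org.apache.lucene.store.IndexInput;
import org.apache.lucene.store.IndexOutput;
import org.apache.lucene.store.SimpleFSDirectory;

public class ChunkedReadSketch {
  public static void main(String[] args) throws Exception {
    SimpleFSDirectory dir = new SimpleFSDirectory(new File("/tmp/chunktest"));
    // Tiny chunk size; with the setter fixed by this issue it takes effect on 64-bit JVMs too.
    dir.setReadChunkSize(13);

    // Write a file noticeably larger than one chunk.
    IndexOutput out = dir.createOutput("blob", IOContext.DEFAULT);
    byte[] data = new byte[1000];
    for (int i = 0; i < data.length; i++) {
      data[i] = (byte) i;
    }
    out.writeBytes(data, data.length);
    out.close();

    // Read it back in one call; internally the read is broken into many 13-byte chunks.
    IndexInput in = dir.openInput("blob", IOContext.DEFAULT);
    byte[] read = new byte[data.length];
    in.readBytes(read, 0, read.length);
    in.close();
    dir.close();
  }
}
{code}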
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]